Adaptive Model Predictive Control For A Class of Constrained Linear Systems Based On The Comparison Model


Automatica 43 (2007) 301-308

www.elsevier.com/locate/automatica
Brief paper
Adaptive model predictive control for a class of constrained linear systems
based on the comparison model

Hiroaki Fukushima^a, Tae-Hyoung Kim^b,*, Toshiharu Sugie^b

a Department of Mechanical Engineering and Intelligent Systems, University of Electro-Communications, Chofu, Tokyo 182-8585, Japan
b Department of Systems Science, Graduate School of Informatics, Kyoto University, Uji, Kyoto 611-0011, Japan

Received 10 September 2004; received in revised form 4 December 2005; accepted 27 August 2006
Abstract
This paper proposes an adaptive model predictive control (MPC) algorithm for a class of constrained linear systems, which estimates system
parameters on-line and produces a control input satisfying the input/state constraints for all possible parameter estimation errors. The key idea is
to combine a robust MPC method based on the comparison model with an adaptive parameter estimation method suitable for MPC. To this
end, first, a new parameter update method based on moving horizon estimation is proposed, which allows an estimation error
bound to be predicted over the prediction horizon. Second, an adaptive MPC algorithm is developed by combining the on-line parameter estimation with an
MPC method based on the comparison model, suitably modified to cope with the time-varying case. This method guarantees feasibility and
stability of the closed-loop system in the presence of state/input constraints. A numerical example is given to demonstrate its effectiveness.
© 2006 Elsevier Ltd. All rights reserved.
Keywords: Model predictive control; Adaptive estimation; Constrained systems; Robust stability; Comparison principle
1. Introduction
Model predictive control (MPC) is one of the most promis-
ing ways to handle control problems for systems having input
and/or state constraints (see e.g. Mayne et al., 2000; Morari and
Lee, 1999; Rawlings, 2000). The model quality plays a vital
role in MPC, but in reality there always exist model uncertain-
ties, which may significantly degrade the system performance.
One way to cope with such problems is to develop robust
MPC methods, which guarantee a certain control performance
against model uncertainties. This type of MPC has been
thoroughly investigated for many years (see e.g. Badgwell,
1997; Kothare et al., 1996; Lee and Kouvaritakis, 2000;
Michalska and Mayne, 1993; Scokaert and Mayne, 1998).

This paper was not presented at any IFAC meeting. This paper was
recommended for publication in revised form by Associate Editor Gang Tao
under the direction of Editor Miroslav Krstic.
* Corresponding author. Tel.: +81 774 38 3952; fax: +81 774 38 3945.
E-mail addresses: [email protected] (H. Fukushima),
[email protected] (T.-H. Kim),
[email protected] (T. Sugie).
0005-1098/$ - see front matter © 2006 Elsevier Ltd. All rights reserved.
doi:10.1016/j.automatica.2006.08.026
In this line of research, the model is fixed though its uncertain-
ties are taken explicitly into account. Therefore, its control per-
formance is limited by the quality of the fixed (initial) model.
Another attractive way to handle model uncertainties is to
update the model on-line based on measurement data. Although
the development of adaptive-type MPC schemes is one of the
research issues for control of constrained systems, there have
been few reports on this topic so far (Mayne et al., 2000). One
of the main reasons is the difficulty of guaranteeing the fulfillment
of constraints in the presence of an adaptive mechanism. In
order to overcome this problem, the future behavior of the real
system must be predicted while updating the system parameters
on-line. In addition, it seems extremely difficult to guarantee
both feasibility and stability theoretically whenever an adaptive
approach to MPC is adopted.
The purpose of this paper is to develop an adaptive MPC
method for a class of constrained linear time-invariant sys-
tems, which estimates system parameters on-line and produces
the control input satisfying state/input constraints for possible
parameter estimation errors. The key idea is to combine the ro-
bust MPC method based on the comparison model (Fukushima
and Bitmead, 2005) with an adaptive parameter estimation
method suitable for MPC. To this end, a new parameter estima-
tion method is first proposed based on moving horizon estima-
tion. This method yields a less conservative prediction
error bound than conventional adaptive estimation algorithms
(see e.g. Ioannou and Sun, 1996; Krstić et al., 1995) by taking
into account the future model improvement. Then, the pro-
posed estimation method is incorporated into a robust MPC
method (Fukushima and Bitmead, 2005), which can handle
state-dependent disturbances, by modifying the comparison
model to handle time-varying parameter estimation errors. In
addition, it is shown that feasibility and stability are guaran-
teed under certain conditions. Finally, a numerical example is
given to demonstrate the effectiveness of the proposed method.
Notice that only the noise-free case is considered in this paper.
In this paper, the following notation will be used: x_i denotes
the ith entry of a vector x. Let ‖x‖ and ‖x‖_∞ denote the
Euclidean and ∞-norms of a vector x, respectively. Let σ_max(M)
denote the largest singular value of a matrix M. The maximum
and minimum eigenvalues of a matrix M are denoted by λ_max(M)
and λ_min(M), respectively.
2. Problem formulation
Consider the following linear time-invariant system in con-
trollable canonical form:

  ẋ(t) = A(θ*) x(t) + B u(t),  x(0) = x_0,                        (1)

  A(θ*) = [ a_1  a_2  ⋯  a_n ; I_{n-1}  0 ],   B = [1  0  ⋯  0]^T,  (2)
where θ* = [a_1, a_2, ..., a_n]^T denotes the uncertain parameter
vector, I_{n-1} is the (n-1) × (n-1) identity matrix and 0 is
the zero vector of appropriate dimension. The constraints for
the measurable state x(t) ∈ R^n and the control input u(t) ∈ R
are given as

  x(t) ∈ X,  X := {x ∈ R^n : |x_i| ≤ ξ_i, ∀i},
  u(t) ∈ U,  U := {u ∈ R : |u| ≤ ρ}                               (3)

for t ≥ 0. It is assumed that a given initial estimate θ_0 of θ* and
an error bound v_0 satisfy

  ‖θ_0 − θ*‖ ≤ v_0.                                               (4)
It is also assumed that a state feedback gain K_0 and a sym-
metric positive definite matrix P ∈ R^{n×n}, both of which are
given in advance, satisfy similar assumptions to those used in
Fukushima and Bitmead (2005), i.e.,

Assumption 1. 1/√λ_min(P) ≤ min_i ξ_i,  ‖K_0‖ ≤ √λ_min(P) ρ.

Assumption 2. α := λ_min(Q) − 2 v_0 σ_max(PB) > 0,
  Q := −(F^T P + PF),  F := A(θ_0) + B K_0.
Note that Assumption 1 implies that the given feedback con-
trol u = K_0 x always satisfies the constraints in the following
ellipsoidal set:

  X_f := {x ∈ R^n : V(x) ≤ 1},  V(x) := √(x^T P x).               (5)

Assumption 2, on the other hand, implies that X_f is a robustly
invariant set under the control law u = K_0 x (Blanchini, 1999;
Fukushima and Bitmead, 2005).
Under these assumptions, our goal is to construct an adaptive
MPC algorithm, which obtains an estimate θ(t) of θ* on-line
and steers the state x to the origin without violating the given
constraints in (3).
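Assumptions 1 and 2 involve only quantities fixed before the controller runs, so they can be checked numerically. The following is a minimal sketch of such a check (not part of the paper; the function name and the use of NumPy are our own assumptions), with A(θ_0) built in the canonical form (2) and the quantities of Assumption 2 evaluated directly:

```python
import numpy as np

def check_assumptions_1_2(theta0, K0, P, v0, xi, rho):
    """Numerically check Assumptions 1-2 for given design data (hypothetical helper)."""
    n = theta0.size
    # Controllable canonical form (2): the first row carries the parameters.
    A0 = np.vstack([theta0, np.hstack([np.eye(n - 1), np.zeros((n - 1, 1))])])
    B = np.zeros((n, 1)); B[0, 0] = 1.0
    F = A0 + B @ K0[None, :]
    Q = -(F.T @ P + P @ F)
    lam_min_P = np.linalg.eigvalsh(P).min()
    # Assumption 1: the terminal controller u = K0 x respects (3) inside X_f.
    a1 = (1.0 / np.sqrt(lam_min_P) <= xi.min()) and \
         (np.linalg.norm(K0) <= np.sqrt(lam_min_P) * rho)
    # Assumption 2: robust invariance margin of X_f under u = K0 x.
    a2 = np.linalg.eigvalsh(Q).min() - 2.0 * v0 * np.linalg.norm(P @ B, 2) > 0.0
    return a1, a2
```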
3. Adaptive parameter estimation algorithm
In order to design an adaptive MPC controller, it is necessary to
develop an appropriate adaptive parameter estimation method
by which the future model improvement can be taken into
account explicitly. In this section, a new adaptive parameter es-
timation algorithm for MPC is proposed. It enables an estimation
error bound to be predicted explicitly over the prediction horizon of MPC.
This key feature allows an adaptive MPC algorithm to be developed
based on a robust MPC method (Fukushima and Bit-
mead, 2005), as shown in the following sections.
To develop an adaptive parameter estimation algorithm, the
following quantities are first defined:

  φ(t) := (1/Λ(s)) x(t),  z(t) := (1/Λ(s)) (ẋ_1(t) − u(t)).        (6)
Here s is the differential operator and Λ(s) is a Hurwitz poly-
nomial given by the designer. Note that (6) yields

  z(t) = φ^T(t) θ*                                                 (7)

from (1). The proposed method obtains an estimate of θ* based
on the following matrices:

  F_φ(t) := ∫_{t−T_e}^{t} φ(r) φ^T(r) dr,
  F_z(t) := ∫_{t−T_e}^{t} φ(r) z(r) dr,                            (8)

where φ(t) := 0, z(t) := 0 for t < 0 and T_e is the estimation
horizon chosen by the designer. Next, the adaptive parameter
estimation algorithm will be described. It is assumed that the
parameters will be updated at each sampling instant t_i := iδ
(i = 0, 1, ...), where δ is the sampling interval. Let 0_n denote
the n × n zero matrix.
Adaptive parameter estimation algorithm.
Step 0: At time t = 0, let i = 0, i.e., t_0 = 0. Then, initialize
β ∈ R, c_φ ∈ R^{n×n} and c_z ∈ R^n as follows:

  β(t_0) = 0,  c_φ(t_0) = 0_n,  c_z(t_0) = 0.

Step 1: Apply the following parameter update law:

  θ̇(t) = Γ (c_z(t_i) − c_φ(t_i) θ(t))                             (9)
for t ∈ [t_i, t_{i+1}), where Γ > 0 denotes an adaptive gain given
by the designer. If t = t_{i+1}, then go to Step 2.
Step 2: Let i = i + 1. Then, update β, c_φ and c_z as

  β(t_i) = λ_min(F_φ(t_i)),  c_φ(t_i) = F_φ(t_i),  c_z(t_i) = F_z(t_i),

if the following condition is satisfied:

  λ_min(F_φ(t_i)) ≥ β(t_{i−1}).                                    (10)

Otherwise, no update is performed, i.e.,

  β(t_i) = β(t_{i−1}),  c_φ(t_i) = c_φ(t_{i−1}),  c_z(t_i) = c_z(t_{i−1}).
Then, go to Step 1.
It is important to note that one of the differences from the
conventional update law (see (23) given later) is the use of the
integrals F_φ and F_z over the estimation horizon as in (9).
Another difference is that (10) in Step 2 aims at choosing the
best data set for parameter estimation in terms of the excitation
of φ(t) over the horizon. In other words, the excitation of φ(t)
is evaluated by λ_min(F_φ(t)) and then the data set which maxi-
mizes λ_min(F_φ(t_j)) (j = 0, 1, ..., i) is selected. It can be seen
from Step 2 in the estimation algorithm that β(t_i) satisfies

  β(t_i) = max_{0 ≤ t_j ≤ t_i} λ_min(F_φ(t_j)).                    (11)
Thus, c_φ and c_z, updated using β in Step 2, satisfy

  c_φ(t_i) = F_φ(t*_i),  c_z(t_i) = F_z(t*_i),                     (12)

  t*_i := arg max_{0 ≤ t_j ≤ t_i} λ_min(F_φ(t_j)).                 (13)
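To make the update concrete, the following is a small sketch (our own discretization, not the authors' implementation) of Steps 0-2: the integrals F_φ and F_z in (8) are approximated by rectangular sums over a sliding window of T_e seconds of filtered data, the data set maximizing λ_min(F_φ) is retained as in (10)-(13), and the update law (9) is integrated with an explicit Euler step between sampling instants.

```python
import numpy as np

class MovingHorizonEstimator:
    """Sketch of Steps 0-2; symbols follow the text, discretization choices are ours."""

    def __init__(self, theta0, gain, T_e, dt):
        n = theta0.size
        self.theta = theta0.astype(float).copy()
        self.gain, self.dt = gain, dt
        self.N_e = int(round(T_e / dt))              # samples per estimation horizon
        self.phi_buf, self.z_buf = [], []            # sliding window of filtered data
        self.beta = 0.0                              # best lambda_min(F_phi) so far, Eq. (11)
        self.c_phi, self.c_z = np.zeros((n, n)), np.zeros(n)   # Step 0

    def step(self, phi, z):
        """One sampling interval: store (phi, z), run Step 2, then Step 1 (Euler)."""
        self.phi_buf.append(np.asarray(phi, float)); self.z_buf.append(float(z))
        self.phi_buf = self.phi_buf[-self.N_e:]; self.z_buf = self.z_buf[-self.N_e:]
        Phi = np.array(self.phi_buf)                 # shape (m, n)
        F_phi = self.dt * np.einsum('ki,kj->ij', Phi, Phi)    # approximates F_phi in (8)
        F_z = self.dt * Phi.T @ np.array(self.z_buf)          # approximates F_z in (8)
        lam = np.linalg.eigvalsh(F_phi).min()
        if lam >= self.beta:                         # condition (10): keep the richer data set
            self.beta, self.c_phi, self.c_z = lam, F_phi, F_z
        # update law (9), integrated over one sampling interval
        self.theta = self.theta + self.dt * self.gain * (self.c_z - self.c_phi @ self.theta)
        return self.theta, self.beta
```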
For the parameter estimation error θ̃ := θ − θ*, the future
value of ‖θ̃(τ)‖, τ ∈ [t, t + T], is obviously unknown, where
T is the prediction horizon in MPC. However, an upper bound
v(τ|t_i), where the argument t_i indicates a value predicted at
time t_i, can be obtained, as shown in the following lemma.

Lemma 1. Let the current time be t = t_i; then a future estimation
error bound v(τ|t_i), which satisfies

  ‖θ̃(τ)‖ ≤ v(τ|t_i),  τ ∈ [t_i, t_i + T],                          (14)

is predicted by

  v(τ|t_i) = v(t_i) e^{−Γ β(t_i)(τ − t_i)},                        (15)

where v(t_i) is set as

  v(t_i) = v_0 for i = 0,  v(t_i) = v(t_i|t_{i−1}) for i ≥ 1,      (16)

and satisfies ‖θ̃(t_i)‖ ≤ v(t_i).
Proof. From (7) and (8), it holds that F_z(t) = F_φ(t) θ*. Thus, it
can be seen from (9) and (12) that

  θ̃̇(t) = −Γ F_φ(t*_i) θ̃(t).                                      (17)

Therefore, for Ξ(θ̃) := ‖θ̃‖, it follows from (11) that

  Ξ̇(θ̃) = −Γ θ̃^T F_φ(t*_i) θ̃ / Ξ(θ̃) ≤ −Γ β(t_i) Ξ(θ̃).          (18)

Thus, (14) is proved by the comparison principle (Miller and
Michel, 1982). Further, it is easily seen from (4) and (14) that
‖θ̃(t_i)‖ ≤ v(t_i).
This result shows that one can take into account the future
improvement of θ(t) by using v(τ|t_i) in the robust MPC method.
Moreover, (11) and (15) show that the proposed algorithm tries
to choose the best estimate θ(t) in the sense that the decay rate
Γ β(t_i) of the predicted error bound v(τ|t_i) is maximized.
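As a small illustration (ours, under the reconstruction of (15) above), the bound can be propagated across the horizon and carried from one sampling instant to the next as in (16):

```python
import numpy as np

def predict_error_bound(v_ti, gain, beta_ti, tau, t_i):
    """v(tau|t_i) = v(t_i) * exp(-Gamma * beta(t_i) * (tau - t_i)), Eq. (15)."""
    return v_ti * np.exp(-gain * beta_ti * (np.asarray(tau) - t_i))

# Carrying the bound across sampling instants delta apart, Eq. (16):
v, delta, gain = 0.321, 0.1, 2.0
for i, beta in enumerate([0.0, 0.05, 0.12]):   # beta(t_i) grows as the data get richer
    v = predict_error_bound(v, gain, beta, (i + 1) * delta, i * delta)
print(v)   # bound after three sampling intervals
```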
Similarly to the existing robust MPC methods (Fukushima
and Bitmead, 2005; Lee and Kouvaritakis, 1999, 2000), the
feedback control law

  u(t) = K(θ(t)) x(t) + ū(t),                                      (19)

which consists of a feedback gain K(θ(t)) and an open-loop
trajectory ū(t), is adopted. The key difference from other robust
MPC approaches is that K(θ(t)) is updated at each time instant.
In this paper, the following simple feedback law is adopted
as a starting point for adaptive MPC ensuring feasibility and
stability:

  K(θ(t)) := −θ^T(t) + θ_0^T + K_0.                                (20)

By substituting (19) into (1), it holds that

  ẋ(t) = F x(t) + B ū(t) − B d(t),  d(t) := θ̃^T(t) x(t).          (21)
Note that the future upper bound v(τ|t_i) of ‖θ̃(τ)‖, τ ∈ [t_i, t_i + T],
is known, as shown in Lemma 1. Thus, the disturbance d(τ),
τ ∈ [t_i, t_i + T], i.e. the effect of the parameter uncertainty, is
bounded as follows:

  d(τ) ∈ D,  D := {d ∈ R : |d(τ)| ≤ v(τ|t_i) ‖x(τ)‖}.              (22)

Thus, the robust MPC method proposed by Fukushima and
Bitmead (2005) is applicable to the system in (21).
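The algebra behind (20)-(21) can be checked numerically; the following short sketch (our own, with random data) verifies that A(θ*) + B K(θ) equals F − B θ̃^T, so the estimation error enters the closed loop only through d = θ̃^T x.

```python
import numpy as np

def canonical_A(theta):
    """A(theta) in the controllable canonical form (2)."""
    n = theta.size
    return np.vstack([theta, np.hstack([np.eye(n - 1), np.zeros((n - 1, 1))])])

n = 3
rng = np.random.default_rng(0)
theta_star, theta0, theta = rng.normal(size=(3, n))   # true, initial and current estimate
K0 = rng.normal(size=n)
B = np.zeros((n, 1)); B[0, 0] = 1.0

K = -theta + theta0 + K0                              # adaptive gain, Eq. (20)
F = canonical_A(theta0) + B @ K0[None, :]
lhs = canonical_A(theta_star) + B @ K[None, :]        # closed-loop matrix under (19)
rhs = F - B @ (theta - theta_star)[None, :]           # Eq. (21) with d = theta_tilde^T x
print(np.allclose(lhs, rhs))                          # True
```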
Remark 1. The conventional parameter estimation method
(see e.g. Ioannou and Sun, 1996; Krstić et al., 1995), such as

  θ̇ = Γ φ ε,  ε = z − φ^T θ,                                       (23)

could be incorporated into a robust MPC method. In this case,
c_φ = φ φ^T in (9). However, this method makes the estimation
error bound v too conservative in the sense that the possible
model improvement in the future is ignored. In other words, v is
fixed over the prediction horizon of MPC, although the estima-
tion error ‖θ̃‖ could be decreased by the parameter estimation
over the horizon.
4. Modified robust MPC algorithm
This section describes a robust MPC method, which is mod-
ified to be combined with the adaptive parameter estimation
method of Section 3. At each time instant t = t_i, i = 0, 1, 2, ...,
this MPC controller determines the control input which guaran-
tees (3). To this end, the following notation is first introduced:
x̄(τ|t), τ ∈ [t, t + T], denotes the predicted value of x(τ) at time
instant t; θ_t(τ), τ ∈ [t, t + T], denotes the estimated parameter
vector based on (9) at time instant t.
Next the state x(τ), τ ∈ [t, t + T], is predicted based on the
nominal model as

  x̄̇(τ|t) = A(θ_t(τ)) x̄(τ|t) + B u(τ|t),  x̄(t|t) = x(t).          (24)

In (24), u(τ|t) denotes the future control predicted at the current
time instant t and has the following form, as mentioned in
Section 3:

  u(τ|t) = K(θ_t(τ)) x̄(τ|t) + ū(τ|t),                              (25)

where ū(τ|t) denotes the future feedforward control predicted
at time instant t. Substituting the control law (25) into (24)
results in the following equation:

  x̄̇(τ|t) = F x̄(τ|t) + B ū(τ|t),  x̄(t|t) = x(t).                  (26)
For the robustness analysis of the closed-loop system, it is
necessary to evaluate the prediction error of x due to the dis-
turbance d(t) caused by the parameter estimation error. More-
over, the constraints for x̄ and ū should be derived such that the
constraints (3) are satisfied for the real system. However, since
the disturbance d(τ) ∈ D depends also on the state x(τ) of the
real system as in (21), it is difficult to evaluate the prediction
error only from the nominal model in (26).
In order to overcome this difficulty, the following additional
scalar system, called the comparison model, is introduced in
the optimization problem of MPC. This system is constructed
based on a priori information on the estimation error bound in
(15), in a similar way to the method in Fukushima and Bitmead
(2005), as

  ẇ(τ|t) = a(τ|t) w(τ|t) + b |ū(τ|t)|,  w(t|t) = V(x(t)),
  a(τ|t) := −(λ_min(Q) − 2 v(τ|t) σ_max(PB)) / (2 λ_max(P)),
  b := ‖P^{1/2} B‖,                                                 (27)

where τ ∈ [t, t + T]. This comparison model yields an upper
bound on the future value of V(x), as shown in the following
lemma.
Lemma 2. For any ū(τ|t), τ ∈ [t, t + T], the comparison model
in (27) and the real system in (21) satisfy

  V(x(τ)) ≤ w(τ|t),  τ ∈ [t, t + T].                                (28)
Proof. Since V(x) = √(x^T P x), it holds that

  V̇(x) = (1 / (2V(x))) (x^T (F^T P + PF) x + 2 x^T PB (ū − d))
       ≤ −(‖x‖ / (2V(x))) (λ_min(Q) ‖x‖ − 2 σ_max(PB) |d|)
         + (‖P^{1/2} x‖ / V(x)) ‖P^{1/2} B‖ |ū|.

It follows from d(τ) ∈ D and V(x) ≤ √λ_max(P) ‖x‖ that

  V̇(x) ≤ −((λ_min(Q) − 2 v(τ|t) σ_max(PB)) / (2 λ_max(P))) V(x) + ‖P^{1/2} B‖ |ū|.

Thus, (28) is shown by the comparison principle.
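A minimal sketch of how the comparison model (27) could be propagated numerically (explicit Euler on a uniform grid; the discretization and function name are our own assumptions) is:

```python
import numpy as np

def propagate_comparison_model(w0, u_bar_abs, v_pred, P, Q, B, dt):
    """w0 = V(x(t)); u_bar_abs[k] = |u_bar(tau_k|t)|, v_pred[k] = v(tau_k|t) on the grid."""
    lam_max_P = np.linalg.eigvalsh(P).max()
    lam_min_Q = np.linalg.eigvalsh(Q).min()
    sig_max_PB = np.linalg.norm(P @ B, 2)
    b = float(np.sqrt(B.T @ P @ B))            # equals ||P^{1/2} B||, since it is sqrt(B^T P B)
    w = np.empty(len(u_bar_abs) + 1); w[0] = w0
    for k in range(len(u_bar_abs)):
        a_k = -(lam_min_Q - 2.0 * v_pred[k] * sig_max_PB) / (2.0 * lam_max_P)   # a(tau_k|t)
        w[k + 1] = w[k] + dt * (a_k * w[k] + b * u_bar_abs[k])                  # Euler step of (27)
    return w   # upper bound on V(x(tau_k)) by Lemma 2, up to discretization error
```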
Once the model for evaluating V(x) is found, the constraint
sets Ū and X̄ for ū and x̄, which depend on v(τ) and w(τ), can
be derived as follows:

  Ū(τ, v, w) := {K_0 x̄ + ū ∈ R : |K_0 x̄ + ū| ≤ ρ − ρ̄(τ, v, w)},
  X̄(τ, v, w) := {x̄ ∈ R^n : |x̄_i| ≤ ξ_i − ξ̄_i(τ, v, w), ∀i},       (29)

where

  ξ̄_i(τ, v, w) := ∫_0^{τ−t} |Φ_i(r)| v(τ−r|t) w(τ−r|t) / √λ_min(P) dr,
  ρ̄(τ, v, w) := (v(τ|t) + v_0)(2 p_1 + p_3) + p_2,
  p_1(τ, v, w) := ∫_0^{τ−t} ‖Φ(r)‖ v(τ−r|t) w(τ−r|t) / √λ_min(P) dr,
  p_2(τ, v, w) := ∫_0^{τ−t} |Ψ(r)| v(τ−r|t) w(τ−r|t) / √λ_min(P) dr,
  p_3(τ, w) := w(τ|t) / √λ_min(P),
  Φ(r) := e^{Fr} B,  Ψ(r) := K_0 e^{Fr} B,

and Φ_i(r) denotes the ith row of Φ(r). The modified constraint
sets X̄ and Ū satisfy the following property.
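The margins ξ̄_i and ρ̄ above are integrals of known kernels against v(τ−r|t) w(τ−r|t); the following rough sketch (our own quadrature on a uniform grid, not the authors' code) evaluates them at a grid point τ = t + k·dt:

```python
import numpy as np
from scipy.linalg import expm

def tightening_margins(F, B, K0, P, v, w, v0, dt, k):
    """Margins at tau = t + k*dt, with grids v[j] = v(t + j*dt|t), w[j] = w(t + j*dt|t)."""
    sqrt_lam_min_P = np.sqrt(np.linalg.eigvalsh(P).min())
    r = np.arange(k + 1) * dt                               # integration variable r in [0, tau - t]
    Phi = np.stack([(expm(F * ri) @ B)[:, 0] for ri in r])  # Phi(r) = e^{Fr} B, shape (k+1, n)
    Psi = np.array([(K0 @ expm(F * ri) @ B).item() for ri in r])   # Psi(r) = K0 e^{Fr} B
    vw = v[k::-1] * w[k::-1] / sqrt_lam_min_P               # v(tau-r|t) w(tau-r|t) / sqrt(lam_min(P))
    xi_bar = dt * np.sum(np.abs(Phi) * vw[:, None], axis=0) # one margin per state component
    p1 = dt * np.sum(np.linalg.norm(Phi, axis=1) * vw)
    p2 = dt * np.sum(np.abs(Psi) * vw)
    p3 = w[k] / sqrt_lam_min_P
    rho_bar = (v[k] + v0) * (2.0 * p1 + p3) + p2
    return xi_bar, rho_bar
```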
Theorem 1. For the nominal model (26) with given x(t) ∈ R^n,
any trajectory ū(τ|t), τ ∈ [t, t + T], satisfying

  x̄(τ|t) ∈ X̄(τ, v(·|t), w(·|t)),
  K_0 x̄(τ|t) + ū(τ|t) ∈ Ū(τ, v(·|t), w(·|t)),                      (30)

satisfies the constraints (3) for the real system (21), i.e.,

  x(τ) ∈ X,  K(θ_t(τ)) x(τ) + ū(τ|t) ∈ U                           (31)

are guaranteed for all possible d(τ) ∈ D, τ ∈ [t, t + T].
Proof. From (21) and (26), it holds that

  x(τ) = e^{F(τ−t)} x(t)
         + ∫_0^{τ−t} e^{Fr} B (ū(τ−r|t) − d(τ−r)) dr,               (32)

  x̄(τ|t) = e^{F(τ−t)} x(t) + ∫_0^{τ−t} e^{Fr} B ū(τ−r|t) dr.       (33)

Therefore, it follows from (22) and Lemma 2 that

  |x_i(τ) − x̄_i(τ|t)| ≤ ∫_0^{τ−t} |Φ_i(r) d(τ−r)| dr
    ≤ ∫_0^{τ−t} |Φ_i(r)| v(τ−r|t) w(τ−r|t) / √λ_min(P) dr = ξ̄_i,   (34)
which implies |x_i(τ)| ≤ |x̄_i(τ|t)| + ξ̄_i from (29). Thus, any
ū(τ|t) satisfying x̄(τ|t) ∈ X̄(τ, v(·|t), w(·|t)) in (30) satisfies
x(τ) ∈ X in (31) for all d(τ) ∈ D, τ ∈ [t, t + T]. Similarly,
from (19), (20) and (25),

  |u(τ) − u(τ|t)| ≤ ∫_0^{τ−t} |K_0 e^{Fr} B| |d(τ−r)| dr
    + (v(τ|t) + v_0) ∫_0^{τ−t} ‖e^{Fr} B‖ |d(τ−r)| dr.              (35)

Here, it holds from (22) and Lemma 2 that

  ‖x(τ)‖ ≤ w(τ|t) / √λ_min(P),
  |d(τ)| ≤ v(τ|t) w(τ|t) / √λ_min(P).                               (36)

Thus, from the definitions of p_1 and p_2,

  |u(τ) − u(τ|t)| ≤ (v(τ|t) + v_0) p_1 + p_2.                       (37)

Also, since it holds from (32), (33) and (36) that

  |(−θ_t^T(τ) + θ_0^T) x̄(τ|t)| ≤ (v(τ|t) + v_0) ‖x̄(τ|t)‖
    ≤ (v(τ|t) + v_0) (‖x(τ)‖ + ∫_0^{τ−t} ‖e^{Fr} B‖ |d(τ−r)| dr),

it follows from (20) and (25) that

  |u(τ|t)| ≤ |K_0 x̄(τ|t) + ū(τ|t)| + |(−θ_t^T(τ) + θ_0^T) x̄(τ|t)|
    ≤ |K_0 x̄(τ|t) + ū(τ|t)| + (v(τ|t) + v_0)(p_1 + p_3).            (38)

Furthermore, from (37), (38) and (29),

  |u(τ)| ≤ |u(τ|t)| + |u(τ) − u(τ|t)|
    ≤ |K_0 x̄(τ|t) + ū(τ|t)| + (v(τ|t) + v_0)(2 p_1 + p_3) + p_2.

This implies that any ū(τ|t) satisfying (30) also satisfies (31)
for all possible d(τ) ∈ D, τ ∈ [t, t + T].
The above theorem gives a sufficient condition on ū under
which the proposed MPC controller satisfies the constraints in
(3) for all t ≥ 0.
Based on the above results, the optimization problem for the
proposed adaptive MPC method is described for given constants
R > 0 and c (≥ max{V(x_0), 1}).
Optimization problem for MPC.

  min_ū J(x(t), ū(·|t)) := ∫_t^{t+T} ū^T(r|t) R ū(r|t) dr           (39)

subject to (15), (26), (27), (30) and

  w(τ|t) ≤ c,  w(t + T|t) ≤ 1,  τ ∈ [t, t + T].                     (40)
Note that the role of c and the conditions which c should
satisfy will be discussed in detail in Section 5. In (39)-(40),
a finite horizon constrained optimization problem without the
disturbance term is solved using the measured state x(t) at the
current time instant t. It is easily verified from Theorem 1 that,
if the problem in (39) is feasible at each time instant, the given
constraints in (3) are always satisfied.
Remark 2. The constraint given in (27) is nonlinear with re-
spect to ū(τ|t). By introducing a new variable μ(τ|t) ∈ R, the
constraint (27) is modified to

  ẇ(τ|t) = a(τ|t) w(τ|t) + b μ(τ|t),
  |ū(τ|t)| ≤ μ(τ|t),  w(t|t) = V(x(t)),                             (41)

and the cost function J(x(t), ū(·|t)) is modified to J(x(t),
μ(·|t)). Therefore, the above problem has only linear constraints
and can be reduced by discretization to a quadratic program-
ming (QP) problem with free variables ū(τ|t) and μ(τ|t).
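As an illustration of Remark 2, a stripped-down sketch of the discretized problem is given below (our own formulation, using cvxpy as a generic QP solver). For brevity it keeps only the comparison-model dynamics (41), the slack bound, a precomputed input box standing in for the Ū constraint, and the two conditions in (40); the state constraints X̄ and the coupling through K_0 x̄ are omitted.

```python
import cvxpy as cp

def solve_mpc_qp(N, dt, a, b, w0, c, u_box, R=1.0):
    """a[k]: a(tau_k|t); u_box[k]: input bound at tau_k (e.g. rho - rho_bar); N grid points."""
    u_bar = cp.Variable(N)                     # feedforward sequence
    mu = cp.Variable(N)                        # slack variable of Remark 2
    w = cp.Variable(N + 1)                     # comparison-model trajectory
    cons = [w[0] == w0, cp.abs(u_bar) <= mu, w[N] <= 1.0]             # terminal part of (40)
    for k in range(N):
        cons += [w[k + 1] == w[k] + dt * (a[k] * w[k] + b * mu[k]),   # Euler form of (41)
                 w[k] <= c,                                           # first part of (40)
                 cp.abs(u_bar[k]) <= u_box[k]]                        # simplified input constraint
    prob = cp.Problem(cp.Minimize(dt * R * cp.sum_squares(u_bar)), cons)
    prob.solve()
    return u_bar.value, w.value
```

Keeping the state constraints and the w-dependent margins would add further affine constraints in (ū, μ, w), so the discretized problem remains a QP, as stated in Remark 2.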
The algorithm that iterates the parameter update in (9) and the
modified robust MPC given in this section forms the adaptive
MPC method proposed in this paper.
5. Feasibility and stability
In the proposed optimization problem for MPC, the addi-
tional constraint (40) for w(τ|t), τ ∈ [t, t + T], is introduced
to guarantee feasibility at each time instant. Feasibility here
means that there exists ū which satisfies the constraints
(15), (26), (27), (30) and (40) of the optimization problem (39)
with a finite value of J(x(t), ū(·|t)). The constant c is a num-
ber satisfying c ≥ max{V(x_0), 1}, and the terminal condition
w(t + T|t) ≤ 1 guarantees x(t + T) ∈ X_f for the real system.
Although c should be as large as possible for feasibility at the
current time instant, the value of c must be bounded
to guarantee feasibility at the next time instant, as shown in the
following assumption.
Assumption 3.

  c v_0 c_Φ ≤ √λ_min(P) min_i ξ_i − 1,
  c v_0 (c_Ψ + 2) ≤ √λ_min(P) ρ − ‖K_0‖                             (42)

for the given c (≥ max{V(x_0), 1}) in (40), where

  c_Φ := ∫_0^T ‖Φ(r)‖ dr,  c_Ψ := ∫_0^T (|Ψ(r)| + 4 v_0 ‖Φ(r)‖) dr.
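Since c_Φ and c_Ψ are fixed horizon integrals, Assumption 3 can also be checked numerically; a short sketch (our own quadrature and function name, based on the reconstruction of (42) above) is:

```python
import numpy as np
from scipy.linalg import expm

def check_assumption3(F, B, K0, P, v0, xi, rho, c, T, dt=1e-2):
    """Evaluate c_Phi, c_Psi and test the two inequalities in (42)."""
    r = np.arange(0.0, T + dt, dt)
    Phi_norm = np.array([np.linalg.norm(expm(F * ri) @ B) for ri in r])
    Psi_abs = np.array([abs((K0 @ expm(F * ri) @ B).item()) for ri in r])
    c_Phi = dt * np.sum(Phi_norm)
    c_Psi = dt * np.sum(Psi_abs + 4.0 * v0 * Phi_norm)
    sqrt_lam_min_P = np.sqrt(np.linalg.eigvalsh(P).min())
    ok1 = c * v0 * c_Phi <= sqrt_lam_min_P * xi.min() - 1.0
    ok2 = c * v0 * (c_Psi + 2.0) <= sqrt_lam_min_P * rho - np.linalg.norm(K0)
    return ok1 and ok2
```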
Note that Assumption 3 is a sufficient condition for
Assumption 1. If Assumption 3 cannot be satisfied for any
c ≥ max{V(x_0), 1}, one needs to choose a smaller terminal
set X_f or modify the initial feedback gain K_0. The following
theorem describes the feasibility and stability properties of
the proposed MPC method.
Theorem 2. Suppose that Assumption 3 is satisfied and the
optimization problem for MPC is feasible at t = 0, i.e., there
exists ū minimizing (39) subject to (15), (26), (27), (30) and
(40) at t = 0. Then, the proposed MPC method has the following
properties.
(i) The optimization problem for MPC is feasible at all time
instants t = iδ, where i = 1, 2, ....
(ii) The state x(t) of the real system converges to the origin
as t → ∞.
Note that the feasible solution ū of the optimization problem
for MPC guarantees that the constraints (3) for the real system
are satisfied, since the condition (30) in Theorem 1 is always
satisfied. The following lemma is a key result to prove Theorem
2 and means that, if the proposed MPC problem is feasible at
time t, then it is also feasible at the next time instant. Its proof
can be found in Fukushima et al. (2006).

Lemma 3. Suppose that Assumption 3 is satisfied and the op-
timization problem for MPC is feasible at the current time in-
stant t. Then, at the next time instant t + δ,

  ū(τ|t + δ) = ū*(τ|t)  for τ ∈ [t + δ, t + T],
  ū(τ|t + δ) = 0        for τ ∈ (t + T, t + T + δ],                 (43)

where ū*(τ|t) denotes the optimal solution determined at time
instant t, is one of the feasible solutions of the optimization
problem for MPC.
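The candidate in (43) is the usual shift-and-pad warm start; on a uniform grid it amounts to the following (a sketch under our own grid convention):

```python
import numpy as np

def shift_warm_start(u_opt, n_shift):
    """Shift u_bar*(.|t) by one sampling interval (n_shift grid points) and pad with zeros."""
    return np.concatenate([u_opt[n_shift:], np.zeros(n_shift)])
```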
Proof of Theorem 2. First, (i) will be proved by induction. The
optimization problem is feasible at t = 0 by assumption. Assume
now it is feasible at each time instant t = iδ (i = 1, ..., k).
Then, since Lemma 3 shows that the control in (43) is feasible
at t = (k + 1)δ, (i) is proved. In order to prove (ii), it will be
shown that the optimal cost J(x(t), ū*(·|t)) is nonincreasing. At
the time instant t + δ, the feasible solution in (43) satisfies

  J(x(t + δ), ū(·|t + δ)) − J(x(t), ū*(·|t))
    = −∫_t^{t+δ} ū*^T(r|t) R ū*(r|t) dr ≤ 0.                        (44)

It also holds that

  J(x(t + δ), ū*(·|t + δ)) ≤ J(x(t + δ), ū(·|t + δ))                 (45)

from the optimality of J(x(t + δ), ū*(·|t + δ)). From (44) and
(45), the optimal cost is nonincreasing, i.e.,

  J(x(t + δ), ū*(·|t + δ)) ≤ J(x(t), ū*(·|t)).                       (46)
Since the optimal cost is nonincreasing and bounded by 0 from
below, it satisfies J(x(t), ū*(·|t)) → c̄ as t → ∞ for a constant
c̄ ≥ 0. This implies that, for each ε > 0, there exists t_ε > 0 such
that

  0 ≤ J(x(t), ū*(·|t)) − J(x(t + δ), ū*(·|t + δ)) < ε                (47)

for t ≥ t_ε. But, from (44) and (45), it holds that

  ∫_t^{t+δ} ū*^T(r|t) R ū*(r|t) dr
    ≤ J(x(t), ū*(·|t)) − J(x(t + δ), ū*(·|t + δ)).                   (48)
Thus, (47)-(48) imply ū*(t|t) → 0 as t → ∞. Therefore, one
can choose t_ε which satisfies |ū*(t|t)| ≤ ε_c, ∀t ≥ t_ε, for any
ε_c > 0. This implies from Lemma 2 that

  V̇(x(t)) ≤ a(t|t) V(x(t)) + b |ū*(t|t)|
          ≤ a* V(x(t)) + b*,  ∀t ≥ t_ε,                              (49)

where b* := b ε_c and a* := a(t_ε|t_ε). Therefore, from the com-
parison principle,

  V(x(t)) ≤ e^{a*(t−t_ε)} V(x(t_ε)) + b* ∫_0^{t−t_ε} e^{a* r} dr
          = e^{a*(t−t_ε)} (V(x(t_ε)) + b*/a*) − b*/a*,               (50)

and the right-hand side of (50) converges to −b*/a* as t → ∞.
Therefore x(t) → 0 as t → ∞, since b* := b ε_c can be chosen
arbitrarily small.
6. Numerical example
Consider a second order system in controllable canonical
form with θ* = [0.5, 1]^T, x_0 = [1, 0.2]^T in (1). Let the
initial estimate of θ* and its estimation error bound be θ_0 =
[0.75, 0.8]^T and v_0 = 0.321, respectively. The state and in-
put constraints are given as |x_i(t)| ≤ 1 (i = 1, 2) and |u(t)| ≤ 4,
respectively. An initial feedback gain K_0 and a matrix P are
chosen as

  K_0 = [5.07  4.06],  P = [0.302  0.307; 0.307  2.31].

Further, the control horizon and the terminal set are selected as
T = 10 s and X_f = {x ∈ R^n : V(x) ≤ 0.7}, respectively. For the
adaptive parameter estimation algorithm, the estimation hori-
zon is set to T_e = 3 s and the adaptive gain to Γ = 2. The
proposed adaptive MPC method is applied to the system, dis-
cretized with sampling time 0.1 s.
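For reference, the example data can be collected as below (a sketch with the values exactly as printed above; any signs lost in the scanned source are not restored here). The computed distance between θ_0 and θ* matches the quoted bound v_0 = 0.321 up to rounding.

```python
import numpy as np

theta_star = np.array([0.5, 1.0])
theta0 = np.array([0.75, 0.8])
print(np.linalg.norm(theta0 - theta_star))   # about 0.320, consistent with v_0 = 0.321
K0 = np.array([5.07, 4.06])
P = np.array([[0.302, 0.307], [0.307, 2.31]])
xi, rho = np.array([1.0, 1.0]), 4.0          # |x_i| <= 1, |u| <= 4
T, X_f_level = 10.0, 0.7                     # control horizon and terminal level set
T_e, gain, dt = 3.0, 2.0, 0.1                # estimation horizon, adaptive gain, sampling time
```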
Fig. 1. v(τ|t) over the prediction horizon, for bounds predicted at t = 0.1, 0.2, 0.3, 0.6 and 1.0 s.
Fig. 2. Time plots of the estimated parameters θ_1(t) and θ_2(t) together with their true values θ_1* and θ_2*.
Fig. 3. Time trajectory of the control input u(t), together with the control predicted at t = 0 and the upper bounds used at t = 0, 0.1 and 0.2 s.
Fig. 1 shows the predicted error bound v(τ|t) in (15). Since the
decay rate is maximized at each time instant, as shown in Section 3,
the error bound v(τ|t) based on (15) decays more rapidly for
larger t. Fig. 2 shows the convergence of the estimated parameters
to their true values. In Fig. 3, the solid line shows the applied
control trajectory u(t), and the dashed line shows the predicted
trajectory K_0 x̄(τ|0) + ū(τ|0) at t = 0, which is obtained by
solving the optimization problem (39) in Section 4. This shows
that the control input obtained by the proposed adaptive MPC
method satisfies the given constraints. The dotted lines show the
upper bounds calculated based on the constraint set Ū in (29) at
time instants t = 0, 0.1 and 0.2 s. These lines show that the maximum
values of applicable control inputs become larger as t increases,
since the robustness margin ρ̄ in (29) for the control constraint
becomes less conservative.
7. Conclusion
In this paper, an adaptive MPC algorithm has been proposed
for a class of constrained linear systems having uncertain pa-
rameters. The proposed method updates the estimates of sys-
tem parameters on-line and produces a control input satisfying
the given state/input constraints. To construct such an adaptive
MPC method, first, a new parameter update algorithm was pro-
posed based on moving horizon estimation. It allows an estimation
error bound to be predicted over the given prediction horizon.
Then, the estimation algorithm was combined with a robust
MPC method based on the comparison model, suitably modi-
fied to cope with the time-varying case. Furthermore, it has been
shown that the proposed algorithm guarantees feasibility and
stability of the closed-loop system in the presence of input/state
constraints. To the best of the authors' knowledge, adaptive
MPC regulation for systems with state/input constraints has not
been fully addressed in the literature, and results with guaranteed
stability/feasibility are not available. This paper provides re-
sults in this direction. However, this is only a first step, and
much work remains to be done in the future.
References
Badgwell, T. A. (1997). Robust model predictive control of stable linear
systems. International Journal of Control, 68(4), 797-818.
Blanchini, F. (1999). Set invariance in control. Automatica, 35(11),
1747-1767.
Fukushima, H., & Bitmead, R. R. (2005). Robust constrained predictive
control using comparison model. Automatica, 41(1), 97-106.
Fukushima, H., Kim, T. H., & Sugie, T. (2006). Adaptive model
predictive control for a class of constrained linear systems based on
the comparison model: Proof of Lemma 3. Preprint available from
http://www.robot.kuass.kyoto-u.ac.jp/J-appendix/AMPC.pdf.
Ioannou, P. A., & Sun, J. (1996). Robust adaptive control. Englewood Cliffs,
NJ: Prentice-Hall.
Kothare, M. V., Balakrishnan, V., & Morari, M. (1996). Robust constrained
model predictive control using linear matrix inequalities. Automatica,
32(10), 1361-1379.
Krstić, M., Kanellakopoulos, I., & Kokotović, P. (1995). Nonlinear and
adaptive control design. New York: Wiley-Interscience.
Lee, Y. I., & Kouvaritakis, B. (1999). Constrained receding horizon predictive
control for systems with disturbances. International Journal of Control,
72(11), 1027-1032.
Lee, Y. I., & Kouvaritakis, B. (2000). Robust receding horizon predictive
control for systems with uncertain dynamics and input saturation.
Automatica, 36(10), 1497-1504.
Mayne, D. Q., Rawlings, J. B., Rao, C. V., & Scokaert, P. O. M. (2000).
Constrained model predictive control: Stability and optimality. Automatica,
36(6), 789-814.
Michalska, H., & Mayne, D. Q. (1993). Robust receding horizon control of
constrained nonlinear systems. IEEE Transactions on Automatic Control,
38(11), 1623-1632.
Miller, R. K., & Michel, A. N. (1982). Ordinary differential equations. New
York: Academic Press.
Morari, M., & Lee, J. H. (1999). Model predictive control: past, present and
future. Computers and Chemical Engineering, 23, 667-682.
Rawlings, J. B. (2000). Tutorial overview of model predictive control. IEEE
Control Systems Magazine, 20(3), 38-52.
Scokaert, P. O. M., & Mayne, D. Q. (1998). Min-max feedback model
predictive control for constrained linear systems. IEEE Transactions on
Automatic Control, 43(8), 1136-1142.
Hiroaki Fukushima received the B.S. and M.S.
degrees in engineering and Ph.D. degree in in-
formatics from Kyoto University, Japan, in 1995,
1998 and 2001, respectively. From 1999 to 2004
he was a Research Fellow of Japan Society for
the Promotion of Science. From 2001 to 2003
he worked as a visiting scholar at the Univer-
sity of California, San Diego. Currently he is
a Research Associate at the University of Electro-
Communications, Japan. His research interests
include system identification and robust control.
Tae-Hyoung Kim was born in Seoul, Korea.
He received the B.S. and M.S. degrees in me-
chanical engineering from Chung-Ang Univer-
sity, Korea, in 1999 and 2001, respectively. He
received the Ph.D. degree in informatics from
Kyoto University, Japan, in 2006. He is currently
a visiting research associate in the Department
of Systems Science, Kyoto University. His cur-
rent research interests include model predictive
control, iterative learning control, cooperative
control and system identification.
Toshiharu Sugie was born in Osaka, Japan.
He received his B.S., M.S. and Ph.D. degrees
in engineering from Kyoto University, Kyoto,
Japan, in 1976, 1978 and 1985, respectively.
From 1978 to 1980, he was a Research Member
of the Musashino Electric Communication Lab-
oratory in NTT, Musashino, Japan. From 1984
to 1988, he was a Research Associate in the Depart-
ment of Mechanical Engineering at the University
of Osaka Prefecture, Osaka. In 1988, he joined
Kyoto University, where he is currently a Pro-
fessor in the Department of Systems Science. His
current research interests include robust control,
nonlinear control, and system identification.