Camacho 2007
where y(t) ∈ Y and u(t) ∈ U are the n- and m-dimensional vectors of outputs and inputs, ψ ∈ Ψ is a vector of possibly unknown parameters, and z(t) ∈ Z is a vector of possibly random variables.
Now consider the model, or family of models, for the process, described by
ŷ(t + 1) = fˆ(y(t), · · · , y(t − nna ), u(t), · · · , u(t − nnb ), θ) (8.2)
where ŷ(t + 1) is the prediction of output vector for instant t + 1 generated
by the model; fˆ is a vector function, usually a simplification of f ; nna and
nnb are the number of past outputs and inputs considered by the model; and
θ ∈ Θ is a vector of uncertainties about the plant. Variables that, although they influence the plant dynamics, are not considered in the model, because of the necessary simplifications or for other reasons, are represented by z(t).
The dynamics of the plant in (8.1) are completely described by the family
of models (8.2) if for any y(t), · · · , y(t − ny ) ∈ Y, u(t), · · · , u(t − nu ) ∈ U,
z(t), · · · , z(t − nz ) ∈ Z and ψ ∈ Ψ, there is a vector of parameters θi ∈ Θ
such that
8.1 Process Models and Uncertainties
The way in which the uncertainty parameter θ and its domain Θ are defined depends mainly on the structures of f and fˆ and on the degree of certainty about the model. In the following, the model structures most used in MPC will be considered.
For an m-input n-output MIMO stable plant, the truncated impulse response is given by N real n × m matrices Ht. The (i, j) entry of Ht corresponds to the value of the ith output of the plant t sampling periods after a unit impulse has been applied to the jth input uj.
The natural way of considering uncertainties is by supposing that the
coefficients of the truncated impulse response, which can be measured ex-
perimentally, are not known exactly and are a function of the uncertainty
parameters. Different types of functions can be used. The most general way
will be by considering that the impulse response may be within a set defined by (Hmin,t)ij ≤ (Ht)ij ≤ (Hmax,t)ij; that is, (Ht)ij(θ) = (Hmt)ij + θtij, with Θ defined by (Hmin,t)ij − (Hmt)ij ≤ θtij ≤ (Hmax,t)ij − (Hmt)ij, and Hmt is the nominal response. The dimension of the uncertainty parameter vector is N × (m × n).
For the case of N = 40 and a 5-input 5-output MIMO plant, the number of
uncertainty parameters is 1000, which will normally be too high for the min-
max problem involved.
This way of modelling does not take into account the possible structures
of the uncertainties. When these are considered, the dimension of the uncer-
tainty parameter set may be considerably reduced.
In [47] and [162] a linear function of the uncertainty parameters is sug-
gested:
Ht(θ) = Σ_{j=1}^{q} Gtj θj

which, for the interval description above, takes the form

Ht(θ) = Hmt + Σ_{i=1}^{n} Σ_{j=1}^{m} θtij Hij

where Hij is a matrix with entry (i, j) equal to one and the remaining entries equal to zero.
The predicted output can be computed as
y(t + j) = Σ_{i=1}^{N} (Hmi + θi) u(t + j − i)

while the nominal prediction is

ym(t + j) = Σ_{i=1}^{N} Hmi u(t + j − i)
The prediction band around the nominal response is then delimited by:

min_{θ∈Θ} Σ_{i=1}^{N} θi u(t + j − i)   and   max_{θ∈Θ} Σ_{i=1}^{N} θi u(t + j − i)
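When Θ is a box, each term θi u(t + j − i) can be extremized independently, so the band above can be evaluated coefficient by coefficient. A minimal sketch; the nominal coefficients, bounds, and input sequence are illustrative assumptions, not values from the text:

```python
# Worst-case prediction band for a truncated impulse response with
# interval-bounded coefficients (symmetric bounds assumed for simplicity).

N = 4                               # truncation horizon
H_nom = [0.5, 0.3, 0.15, 0.05]      # nominal impulse response H_m (assumed)
theta_max = [0.1, 0.06, 0.03, 0.01] # |theta_i| <= theta_max[i] (assumed)

def prediction_band(u_past):
    """u_past[i] holds u(t + j - 1 - i); returns (lower, upper) around ym(t + j)."""
    y_nom = sum(H_nom[i] * u_past[i] for i in range(N))
    # Each term theta_i * u is extremized independently at a vertex of Theta,
    # so the half-width of the band follows from the sign of each input sample.
    half = sum(theta_max[i] * abs(u_past[i]) for i in range(N))
    return y_nom - half, y_nom + half

lo, hi = prediction_band([1.0, -0.5, 0.25, 0.0])
```

The band collapses to the nominal prediction when all inputs in the window are zero, which matches the intuition that the uncertainty only acts through excited coefficients.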
If the uncertainty band about the dead time is wider than the sampling time used, it will translate into a change in the order of the polynomial, or into coefficients that can change to and from zero. If the uncertainty band about the dead time is smaller
than the sampling time, the pure delay time of the discrete-time model does
not have to be changed. The fractional delay time can be modelled by the
first terms of a Padé expansion and the uncertainty bound of these coeffi-
cients can be calculated from the uncertainties of the dead time. In any case
dead time uncertainty bounds tend to translate into a very high degree of
uncertainty about the coefficients of the polynomial matrix B(z −1 ).
The prediction equations can be expressed in terms of the uncertainty
parameters. Unfortunately, for the general case, the resulting expressions are
too complicated and of little use because the involved min-max problem
would be too difficult to solve in real time. If the uncertainties only affect
polynomial matrix B(z −1 ), the prediction equation is an affine function of
the uncertainty parameter and the resulting min-max problem is less com-
putationally expensive, as will be shown later in the chapter. Uncertainties
on B(z −1 ) can be given in various ways. The most general way is by con-
sidering uncertainties on the matrices (Bi = Bni + θi ). If the plant can be
described by a linear combination of q known linear time invariant plants
with unknown weighting θj , polynomial matrix B(z −1 ) can be expressed as:
B(z −1) = Σ_{i=1}^{q} θi Pi(z −1)

with dim(θ(t)) = q.
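The combination can be sketched coefficient-wise; the two numerator polynomials Pi and the weights below are assumed for illustration:

```python
# B(z^-1) expressed as a weighted combination of q = 2 known plants'
# numerator polynomials (coefficient lists, lowest power of z^-1 first).

P = [[0.2, 0.1],   # coefficients of P_1(z^-1) (assumed)
     [0.4, 0.3]]   # coefficients of P_2(z^-1) (assumed)

def combine(theta):
    """B(z^-1) = sum_i theta_i * P_i(z^-1), returned as a coefficient list."""
    q, deg = len(P), len(P[0])
    return [sum(theta[i] * P[i][k] for i in range(q)) for k in range(deg)]

B = combine([0.25, 0.75])   # dim(theta) = q = 2
```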
Notice that global uncertainties can be related to other types of uncertainties. For the impulse response model, the output at instant t + j with the parametric and with the global (temporal) uncertainty descriptions is given, respectively, by:

ŷ(t + j) = Σ_{i=1}^{N} (Hmi + θi) u(t + j − i)

ŷ(t + j) = Σ_{i=1}^{N} Hmi u(t + j − i) + θ(t + j)

Therefore θ(t + j) = Σ_{i=1}^{N} θi u(t + j − i), and the limits (θmin, θmax) for each component of vector θ(t + j) when u(t) is bounded (in practice always) are given by:

θmin = min_{u(·)∈U, θ∈Θ} Σ_{i=1}^{N} θi u(t + j − i)

θmax = max_{u(·)∈U, θ∈Θ} Σ_{i=1}^{N} θi u(t + j − i)
that is, a model without integrated uncertainties. Let us suppose that the
past inputs and outputs and future inputs are zero, thus producing a zero
nominal trajectory. The output of the uncertain system is given by
y(t + 1) = θ(t + 1)
y(t + 2) = a θ(t + 1) + θ(t + 2)
···
y(t + N) = Σ_{j=0}^{N−1} a^j θ(t + N − j)
The upper bound of the band will grow as |a|^{j−1} θmax and the lower bound as |a|^{j−1} θmin. For stable systems the band will stabilize to a maximum value of θmax/(1 − |a|) and θmin/(1 − |a|). This type of model will not incorporate the possible drift in the process caused by external perturbations.
For the case of integrated uncertainties, defined by the following model

y(t + 1) = a y(t) + b u(t) + θ(t)/Δ

the contribution of the uncertainties to each one-step prediction is

ỹ(t + k) = Σ_{j=0}^{k−1} a^j θ(t + k − j)

and

y(t + N) = Σ_{k=1}^{N} ỹ(t + k) = Σ_{k=1}^{N} Σ_{j=0}^{k−1} a^j θ(t + k − j)

indicating that the uncertainty band will grow continuously. The rate of growth of the uncertainty band stabilizes to θmax/(1 − |a|) and θmin/(1 − |a|) after the transient caused by the process dynamics.
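Both growth laws can be checked numerically. The values of a and the symmetric uncertainty bound below are illustrative assumptions:

```python
# Band growth for the non-integrated and the integrated uncertainty models,
# assuming scalar dynamics a = 0.9 and |theta| <= theta_max = 0.1.

a, theta_max = 0.9, 0.1

def band(N):
    """Worst-case |y(t+N)| for the non-integrated uncertainty model."""
    return sum(abs(a) ** j * theta_max for j in range(N))

def band_integrated(N):
    """Worst-case |y(t+N)| when the uncertainty enters through an integrator."""
    return sum(band(k) for k in range(1, N + 1))

# band(N) converges to theta_max / (1 - |a|), while the integrated band keeps
# growing, its per-step increment converging to that same limit.
limit = theta_max / (1 - abs(a))
```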
In order to generate the j step ahead prediction for the output vector, let
us consider the Bezout identity:
I = Ej (z −1 )Ã(z −1 ) + Fj (z −1 )z −j (8.5)
Notice that the prediction will be included in a band around the nominal prediction ym(t + j) = Fj(z −1)y(t) + Ej(z −1)B(z −1)Δu(t + j − 1), delimited by
y = Gu u + Gθ θ + f (8.7)
J(N1, N2, Nu) = Σ_{j=N1}^{N2} [ŷ(t + j | t) − w(t + j)]² + Σ_{j=1}^{Nu} λ[Δu(t + j − 1)]² (8.8)
If the prediction Equation (8.7) is used, Equation (8.8) can now be written
as
implies finding the minimum at one vertex of the polytope Θ, the computa-
tion time can be prohibitive for real-time applications with long costing and
control horizons. The problem gets even more complex when the uncertain-
ties on the parameters of the transfer function are considered. The amount of
computation required can be reduced considerably if other types of objective
functions are used, as will be shown in the following sections.
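The reason the inner maximization can be restricted to vertices is that, for a cost that is the absolute value (or a convex function) of an expression affine in θ, the maximum over a box Θ is attained at a vertex. A small numerical sketch with arbitrary coefficients:

```python
# For g(u, theta) affine in theta, max over a box of |g| is attained at a
# vertex of the box; random interior points can never do better.
import itertools
import random

g0, grad = 0.3, [0.5, -0.2, 0.1]          # g = g0 + grad . theta (assumed)
lo, hi = -1.0, 1.0                        # Theta = [-1, 1]^3

def g(theta):
    return g0 + sum(a * b for a, b in zip(grad, theta))

# Exhaustive evaluation at the 2^3 vertices of Theta
vertex_max = max(abs(g(v)) for v in itertools.product([lo, hi], repeat=3))

random.seed(0)
interior_max = max(
    abs(g([random.uniform(lo, hi) for _ in range(3)])) for _ in range(5000)
)
```

Here `vertex_max` equals |0.3 + 0.5 + 0.2 + 0.1| = 1.1, reached at the vertex θ = (1, −1, 1), and no sampled interior point exceeds it.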
Campo and Morari [47] showed that by using an ∞-∞ type of norm the min-
max problem involved can be reduced to a linear programming problem that
requires less computation and can be solved with standard algorithms. Al-
though the algorithm proposed by Campo and Morari was developed for
processes described by the truncated impulse response, it can easily be ex-
tended to the left matrix fraction descriptions used throughout the text.
The objective function is now described as
Note that this objective function will result in an MPC which minimizes the
maximum error between any of the process outputs and the reference trajec-
tory for the worst situation of the uncertainties; the control effort required to
do so is not taken into account.
By making use of the prediction equation y = Gu u+Gθ θ+f and defining
g(u, θ) = (y − w), the control problem can be expressed as
Define μ∗ (u) as
μ∗(u) = max_{θ∈Θ} max_{i=1,···,n×N} |gi(u, θ)|

min_{μ,u} μ

subject to:

−μ ≤ gi(u, θ) ≤ μ for i = 1, · · · , n × N, ∀θ ∈ Θ
(ymin)i − wi ≤ gi(u, θ) ≤ (ymax)i − wi ∀θ ∈ Θ
min_{μ,u} μ

subject to, for every vertex θ ∈ E:

1μ ≥ Gu u + Gθ θ + f − w
1μ ≥ −Gu u − Gθ θ − f + w
ymax ≥ Gu u + Gθ θ + f
−ymin ≥ −Gu u − Gθ θ − f

and

u ≥ umin
−u ≥ −umax
Umax ≥ T u + 1u(t − 1)
−Umin ≥ −T u − 1u(t − 1)
with

x = [u − umin; μ]   ct = [0, · · · , 0, 1]   At = [At1, · · · , At2N, Atu]   bt = [bt1, · · · , bt2N, btu]

where the first m × Nu entries of c are zero, and

bu = [umax − umin; Umax − T umin − 1u(t − 1); −Umin + T umin + 1u(t − 1)]
8.2.3 1-norm
J(u, θ) = Σ_{j=N1}^{N2} Σ_{i=1}^{n} |yi(t + j | t, θ) − wi(t + j)| + λ Σ_{j=1}^{Nu} Σ_{i=1}^{m} |Δui(t + j − 1)| (8.12)
where N1 and N2 define the prediction horizon and Nu defines the control
horizon. If a series of μi ≥ 0 and βi ≥ 0 can be found such that, for all θ ∈ Θ,
μ∗(u) = max_{θ∈E} Σ_{j=N1}^{N2} Σ_{i=1}^{n} |yi(t + j, θ) − wi(t + j)| + λ Σ_{j=1}^{Nu} Σ_{i=1}^{m} |Δui(t + j − 1)|
When the global uncertainty model is used and constraints on the output variables, the manipulated variables (Umin, Umax), and the slew rate of the manipulated variables (umin, umax) are taken into account, the problem can be interpreted as an LP problem:
min_{γ,μ,β,u} γ

subject to, for every vertex θ ∈ E:

μ ≥ Gu u + Gθ θ + f − w
μ ≥ −Gu u − Gθ θ − f + w
ymax ≥ Gu u + Gθ θ + f
−ymin ≥ −Gu u − Gθ θ − f

and

β ≥ u
β ≥ −u
u ≥ umin
−u ≥ −umax
Umax ≥ T u + 1u(t − 1)
−Umin ≥ −T u − 1u(t − 1)
γ ≥ 1t μ + λ1t β
The problem can be transformed into the usual form

min_x ct x subject to Ax ≤ b, x ≥ 0

with

x = [u − umin; μ; β; γ]   ct = [0, · · · , 0, 1]

where the first m × Nu + n × N + m × Nu entries of c are zero, and

At = [At1, · · · , At2N, Atu]   bt = [bt1, · · · , bt2N, btu]
where the block matrices take the following form (rows separated by semicolons):

Ai = [ Gu −I 0 0 ; −Gu −I 0 0 ; Gu 0 0 0 ; −Gu 0 0 0 ]

bi = [ −Gu umin − Gθ θi − f + w ; Gu umin + Gθ θi + f − w ; ymax − Gu umin − Gθ θi − f ; −ymin + Gu umin + Gθ θi + f ]

Au = [ I 0 0 0 ; I 0 −I 0 ; −I 0 −I 0 ; T 0 0 0 ; −T 0 0 0 ; 0 1t λ1t −1 ]

bu = [ umax − umin ; −umin ; umin ; Umax − T umin − 1u(t − 1) ; −Umin + T umin + 1u(t − 1) ; 0 ]
8 Robust Model Predictive Control
and θi is the ith vertex of E. The number of variables involved in the linear programming problem is 2 × m × Nu + n × N + 1, while the number of constraints is 4 × n × N × 2^{n×N} + 5 × m × Nu + 1. As the number of constraints is much higher than the number of decision variables, solving the dual LP problem should be less computationally expensive than solving the primal problem.
x = Gu u + Gθ θ + Fx x(t) (8.14)
Taking the rows corresponding to x(t + N ) and substituting them into the
inequality defining the terminal region
where guN, gθN, and fxN are the last n rows of Gu, Gθ, and Fx respectively, with n = dim(x). The left-hand sides of Inequalities (8.15) are affine functions of the uncertainty vector θ. Problem (8.13) results in a QP or LP problem
(depending on the type of the objective function) with an infinite number of
constraints. As in the previous cases, because the constraints are affine ex-
pressions of the uncertainties, if the inequalities hold for all extreme points
(vertices) of Θ they also hold for all points inside Θ; that is, the infinite con-
straints can be replaced by a finite number (although normally very high) of
constraints and the problem is solvable. The problem can be expressed as
It is easy to see that if Inequality (8.20) is satisfied, all constraints in (8.19) will also be satisfied. Furthermore, if any constraint in (8.19) is not satisfied, then constraint (8.20) will not be satisfied. Problem (8.17) can be expressed with a
considerably smaller number of constraints:
where m is a vector with its jth entry equal to min_{θ∈Θ} rθj θ. Notice that these quantities are constant and can be computed offline.
The next example is the frequently found case of uncertainties in the gain.
Consider a second-order system described by the following difference equa-
tion
8.5 Illustrative Examples
Fig. 8.1. (a) Output bound violation and (b) output with min-max algorithm
where 0.5 ≤ K ≤ 2. That is, the process static gain can be anything from
half to twice the nominal value. A quadratic norm is used with a weighting
factor of 0.1 for the control increments, a control horizon of 1, and a predic-
tion horizon of 10. The control increments were constrained between −1 and
1. Figure 8.2(a) shows the results obtained by applying a constrained GPC
for three different values of the process gain (nominal, maximum and mini-
mum). As can be seen, the results obtained by the GPC deteriorate when the
gain takes the maximum value giving rise to an oscillatory behaviour.
The results obtained when applying a min-max GPC for the same cases
are shown in Figure 8.2b. The min-max problem was solved in this case by
using a gradient algorithm in the control increments space. For each point
visited in this space the value of K maximizing the objective function had
to be determined. This was done by computing the objective function for
the extreme points of the uncertainty polytope (two points in this case).

Fig. 8.2. Uncertainty in the gain: (a) constrained GPC and (b) min-max GPC

The
responses of the min-max GPC, which takes into account the worst case, are
acceptable for all situations as can be seen in Figure 8.2(b).
A simulation study was carried out with 600 cases varying the process
gain uniformly in the parameter uncertainty set from the minimum to max-
imum value. The bands limiting the output for the constrained GPC and the
min-max constrained GPC are shown in Figure 8.3. As can be seen, the uncer-
tainty band for the min-max constrained GPC is much smaller than the one
obtained for the constrained GPC.
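The min-max GPC of the example can be sketched as follows. The horizons, weighting, input-increment constraint, and gain vertices follow the text; the second-order plant coefficients are illustrative assumptions, since the example's actual difference equation is not reproduced in this excerpt:

```python
# Min-max GPC sketch: quadratic norm, lambda = 0.1, control horizon Nu = 1,
# prediction horizon N = 10, -1 <= du <= 1, inner maximization over the
# gain vertices K in {0.5, 2}.
# Assumed plant: y(t+1) = 1.2 y(t) - 0.32 y(t-1) + 0.12 K u(t)  (nominal gain 1)

A1C, A2C, B0 = 1.2, -0.32, 0.12
LAM, NP, W = 0.1, 10, 1.0          # control weighting, prediction horizon, setpoint

def worst_cost(du, y0, y1, u_prev):
    """Worst-case cost over the gain vertices for a single increment du (Nu = 1)."""
    worst = 0.0
    for K in (0.5, 2.0):
        y, yp, u = y0, y1, u_prev + du     # input held constant after the first move
        cost = LAM * du * du
        for _ in range(NP):
            y, yp = A1C * y + A2C * yp + K * B0 * u, y
            cost += (y - W) ** 2
        worst = max(worst, cost)
    return worst

def minmax_move(y0, y1, u_prev, grid=201):
    """Gridded minimization of the worst-case cost over the admissible increment."""
    dus = [-1.0 + 2.0 * i / (grid - 1) for i in range(grid)]
    return min(dus, key=lambda du: worst_cost(du, y0, y1, u_prev))

du_opt = minmax_move(0.0, 0.0, 0.0)
```

With Nu = 1 the decision space is a scalar, so a simple grid stands in for the gradient search described in the text; the inner maximization reduces to evaluating the two vertices, exactly as in the example.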
[Fig. 8.3: bands limiting the output for the constrained GPC and the min-max constrained GPC]
F(x) = F0 + Σ_{i=1}^{m} xi Fi > 0 (8.23)
where Fi are symmetric real n × n matrices, xi are variables, and F(x) > 0 means that F(x) is positive definite. The three main LMI problems are:
1. the feasibility problem: determining variables x1, x2, . . . , xm so that Inequality (8.23) holds.
2. the linear programming problem: finding the optimum of Σ_{i=1}^{m} ci xi subject to F(x) > 0.
3. the generalized eigenvalue minimization problem: finding the minimum λ such that λA(x) − B(x) > 0, A(x) > 0, B(x) > 0.
Many problems can be expressed as LMI problems [35], even inequality
expressions that are not affine in the variables. This is the case of quadratic in-
equalities, frequently used in control, which can be transformed into an LMI
form using Schur complements: let Q(x), R(x), and S(x) depend affinely on x, with Q(x) and R(x) symmetric. Then the LMI problem

[ Q(x) S(x) ; S(x)T R(x) ] > 0

is equivalent to

R(x) > 0,  Q(x) − S(x)R(x)−1 S(x)T > 0
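The equivalence can be illustrated numerically in the scalar-block case, where the LMI is a 2 × 2 matrix; the test values below are arbitrary:

```python
# Schur-complement equivalence for scalar blocks Q, R, S: the 2x2 matrix
# [[Q, S], [S, R]] is positive definite iff R > 0 and Q - S R^-1 S > 0.

def pd_2x2(m):
    """Positive definiteness of a symmetric 2x2 matrix via leading minors."""
    return m[0][0] > 0 and m[0][0] * m[1][1] - m[0][1] * m[1][0] > 0

def schur_conditions(q, s, r):
    """R > 0 and Q - S R^-1 S^T > 0, written for the scalar case."""
    return r > 0 and q - s * s / r > 0

# The two characterizations agree on definite and indefinite examples alike.
for (q, s, r) in [(2.0, 1.0, 1.0), (1.0, 2.0, 1.0), (3.0, -1.5, 2.0)]:
    assert pd_2x2([[q, s], [s, r]]) == schur_conditions(q, s, r)
```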
There are efficient algorithms to solve LMI problems which have been
applied to solve control problems such as robust stability, robust pole place-
ment, optimal LQG, and robust MPC. In this last context, Kothare et al. [109]
proposed a robust constrained model predictive control as follows
Consider the linear time-varying system:
236 8 Robust Model Predictive Control
The problem is solved by finding a linear feedback control law u(k + i|k)
= F x(k + i|k) such that V (x(k|k)) is minimized. Suppose that there are no
constraints in the state and inputs and that the model uncertainties are de-
fined as follows
where Co denotes the convex hull defined by vertices [Ai , Bi ]. That is, any
plant [A, B] ∈ Ω can be expressed as
[A, B] = Σ_{i=1}^{L} λi [Ai, Bi]

with λi ≥ 0 and Σ_{i=1}^{L} λi = 1.
In these conditions, the robust MPC can be transformed into the following
LMI problem
min_{γ,Q,Y} γ (8.29)

subject to

[ 1 x(k|k)T ; x(k|k) Q ] ≥ 0 (8.30)
8.7 Closed-Loop Predictions
[ Q  QAjT + YT BjT  Q Q1^{1/2}  YT R^{1/2} ;
  Aj Q + Bj Y  Q  0  0 ;
  Q1^{1/2} Q  0  γI  0 ;
  R^{1/2} Y  0  0  γI ]  ≥ 0 ,  j = 1, · · · , L (8.31)
Once this LMI problem is solved, the feedback gain can be obtained by:
F = Y Q−1
Kothare et al. [109] demonstrated that constraints on the state and ma-
nipulated variables and other types of uncertainties can also be formulated
and solved as LMI problems.
The main drawbacks of this method are:
• Although LMI algorithms are supposed to be numerically efficient, they
are not as efficient as specialized LP or QP algorithms.
• The manipulated variables are computed as a linear feedback of the state
vector satisfying constraints, but when constraints are present, the opti-
mum does not have to be linear.
• Feasibility problems are more difficult to treat as the physical meaning of
constraints is somehow lost when transforming them into the LMI format.
the first player has to make his second move, the moves made at the first
stage by both players are known and a more informed decision can be made.
That is, the problem can be posed as
y(t + j) = yn(t + j) + Σ_{i=1}^{j} a^{j−i} θ(t + i − 1)
|y(t)| ≤ 2. Notice that if the uncertainties take one of the extreme values
θ(t + j) = 1 or θ(t + j) = −1 and θ(t + j) = sign(yn (t + j)) is chosen then:
That is, there is no sequence of the control moves that guarantees that process
variables will be within bounds for all possible realizations of uncertainties.
However, if the manipulated variable is chosen as u(t+j) = −ay(t+j)/b,
the prediction equations are now:
Then |y(t + j)| = |θ(t + j − 1)| ≤ 1 ≤ 2; that is, the constraints are fulfilled
with this simple control law for all possible values of the uncertainties. The
difference is that u(t + j) is now computed with θ(t), . . . θ(t + j − 1) known
while in the previous case, u(t + j) was computed with no knowledge of
θ(t) . . . θ(t + j − 1).
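The argument can be checked numerically. The uncertainty bound and output constraint follow the text; the values of a and b are assumptions, since the example's actual parameters are not visible in this excerpt:

```python
# Open-loop vs. closed-loop feasibility for y(t+1) = a y(t) + b u(t) + theta(t)
# with |theta| <= 1 and the constraint |y| <= 2. Assumed: a = 0.9, b = 0.5.

a, b, theta_max, y_bound, N = 0.9, 0.5, 1.0, 2.0, 10

# Open loop: for ANY fixed input sequence, the adversary can shift y(t+j) away
# from the nominal trajectory by up to sum_{i<j} a^i * theta_max.
half_band = [sum(a ** i * theta_max for i in range(j)) for j in range(N + 1)]
open_loop_infeasible = max(half_band) > y_bound   # the band alone exceeds the limit

# Closed loop u(t+j) = -a y(t+j) / b: then y(t+j+1) = theta(t+j), so
# |y(t+j)| <= theta_max <= y_bound for every uncertainty realization.
closed_loop_ok = theta_max <= y_bound
```

Already at j = 3 the open-loop half-band is 1 + 0.9 + 0.81 = 2.71 > 2, so no single input sequence can keep both extreme realizations inside the bounds, while the feedback law trivially can.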
The previous example has shown that the traditional (open-loop) prediction
strategy used in min-max MPC results in an infeasible problem. The reason
is that a single control profile cannot handle all possible future uncertainties.
The example also shows that a simple linear controller can find a feasible so-
lution to the problem by using feedback. This is the key issue: the open-loop
MPC tries to find a solution to the control problem (u(t), u(t + 1), · · · , u(t +
N − 1)) with the information available at sampling time t but the reality is
that because of the receding control strategy, at time t + 1 the information
about the process state (and therefore the uncertainties) at time t + 1 will be
available. By using the open-loop prediction, the future control moves are
computed as
that is, u(t + j) is computed as a function of the state at time t, while in the
second case, a control law is given (u(t + j) = −ay(t + j)/b, in the example)
by a function of the state at t + j:
In an MPC framework, this will translate into optimizing over the possi-
ble control laws: the decision variables now are not u(t+j) but all the possible
functions gt+j (x(t + j)). The optimizer will have to search in the space of all
possible functions of x(t + j). This is a much harder problem to solve.
Another approach to the problem is to consider different variables for
each possible realization of the perturbations (uncertainties) as proposed in
[189]. Suppose that the realization of the perturbation and uncertainties are
known. This would be the ideal situation from a control point of view: no un-
certainty in the model or disturbances. The process could be controlled in an
open loop manner applying a previously computed control law optimizing
some operational criteria. Suppose that we compute the optimum for every
possible realization of the perturbations. We would have for each particular
realization of the perturbations, the initial state and the possible realization
of the reference (notice that if no future references are known they can be
considered uncertainties)
[u(t), . . . , u(t + N − 1)] = f (x(t), θ(t + 1), . . . , θ(t + N ), r(t + 1), · · · , r(t + N ))
(8.35)
Notice that u(t) can be different for each realization of the uncertainties.
However, we would like to have a u(t) which depends only on state x(t). If
this u(t) is applied to the process, the next possible states will be given by
x1+ = f(x0, u0, θ+)
x1− = f(x0, u0, θ−)

In either of these two states we would like to apply a control law which depends only on the state. That is, we have two more variables, u1+ and u1−, associated to each possible realization of the uncertainty. We now have the following set of possible states for the next time instant:

x2++ = f(x1+, u1+, θ+)
x2+− = f(x1+, u1+, θ−)
x2−+ = f(x1−, u1−, θ+)
x2−− = f(x1−, u1−, θ−)

We can now associate the following decision variables to the next step: u2++, u2+−, u2−+, u2−−. If the process uncertainties can take only two possible
values (or when these two values are the only ones relevant to the max problem), the number of decision variables added at each sampling instant j is 2^j. In general, at each sampling instant, the number of decision variables added is m^j, where m is the number of possible uncertainties to be considered at sampling time j. The number of decision variables for the min problem is Σ_{j=1}^{N} m^{j−1}.
In a multivariable case with four states and only one uncertainty param-
eter for each state and two possible values of interest for each uncertainty
parameter, the number of possible realizations of the uncertainties at the ex-
treme points is m = 16. In this case, if the control horizon is N = 10, the num-
ber of decision variables for the minimization problem would be 7.3 × 1010 .
By using causality arguments the number of decision variables decreases but
the problem gets more complex because additional constraints have to be
added. The method is regarded as impractical except for very small prob-
lems.
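The count quoted above can be verified directly:

```python
# Decision-variable count for the scenario-tree min problem: m = 2^4 = 16
# uncertainty vertices per step and horizon N = 10.

m, N = 16, 10
n_vars = sum(m ** (j - 1) for j in range(1, N + 1))   # geometric series
# n_vars == (16**10 - 1) // 15 == 73,300,775,185, i.e. about 7.3e10
```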
Jt(x(t), u, θ) = Σ_{j=0}^{N−1} L(x(t + j), u(t + j)) + F(x(t + N))

with J̄t+N−1(x(t + N − 1), u(t + N − 1)) = max_{θ(t+N−1)} L(x(t + N − 1), u(t + N − 1)) + F(x(t + N)).
Notice that F (x(t + N )) measures the merit of the last position, and this
is the way to avoid entering an infinite loop.
Suppose we are able to solve Problem (8.38) explicitly, i.e., determining J∗t+N−1(x(t + N − 1)) as a function of x(t + N − 1). This is the cost of going from x(t + N − 1) to the end. At the next stage we would encounter the following problem:
J∗t+N−2(x(t + N − 2)) = min_{u(t+N−2)} J̄t+N−2(x(t + N − 2), u(t + N − 2)) (8.39)

with

J̄t+N−2(x(t + N − 2), u(t + N − 2)) = max_{θ(t+N−2)} L(x(t + N − 2), u(t + N − 2)) + J∗t+N−1(f(x(t + N − 2), u(t + N − 2), θ(t + N − 2)))

and, at the last stage,

J̄t(x(t), u(t)) = max_{θ(t)} L(x(t), u(t)) + J∗t+1(f(x(t), u(t), θ(t)))
The closed-loop min-max MPC control move u∗(t) for a particular value of x(t) is the minimum of J̄t(x(t), u(t)). The key factor in Dynamic Programming is finding the functions J∗t+j(x(t + j)). If we include constraints in the min-max MPC, each of the steps taken earlier can be described as
Notice that constraints are not taken into account in the optimization Problem (8.42) because keeping the process within constraints is the mission of the input variables, while the objective of the uncertainties, in this game, is to maximize the cost regardless of constraint fulfillment, as indicated in [21]. There it has also been demonstrated that if the system is linear, x(t + 1) = A(ω(t))x(t) + B(ω(t))u(t) + Ev(t), with the uncertainty vector θ(t)T = [ω(t)T v(t)T], and the stage cost of the objective function is defined as L(x(t + j), u(t + j)) = ‖Qx(t + j)‖p + ‖Ru(t + j)‖p, with the terminal cost defined as J∗t+N(x(t + N)) = ‖Px(t + N)‖p, then the solution is a piecewise affine function of the state. This will be seen in Chapter 11.
Another way of solving the problem is to approximate the functions J∗t+j(x(t + j)) in a grid over the state as suggested in [117]. The idea is to impose a grid on the state space and then to compute J∗t+N−1(x(t + N − 1)) for all points in that grid. At the next stage, function J∗t+N−2(x(t + N − 2)) is computed for the points in the grid using an interpolation of J∗t+N−1(x(t + N − 1)) when x(t + N − 1) = f(x(t + N − 2), u(t + N − 2), θ(t + N − 2)) does not coincide with one of the points in the grid. The main drawback of this method is that only problems with a small dimension of the state space can be implemented, as the number of one-stage min-max problems to be solved will be N × NG^{dim(x)} (NG being the number of grid points per state dimension).

[Block diagram: min-max MPC in cascade with a stabilizing controller C around the plant; signals w, v, u, x, y]
x(t + k) = A^k x(t) + Σ_{j=0}^{k−1} A^{k−1−j} B u(t + j) + Σ_{j=0}^{k−1} A^{k−1−j} ϑ(t + j) (8.46)

x(t + k) = AK^k x(t) + Σ_{j=0}^{k−1} AK^{k−1−j} B v(t + j) + Σ_{j=0}^{k−1} AK^{k−1−j} ϑ(t + j)
The first two terms of the right-hand side of Expressions (8.46) correspond to the nominal trajectory (i.e., when the uncertainties are zero). The errors caused by uncertainties in the open-loop (x̃o(t + k)) and closed-loop (x̃c(t + k)) structures are given by:

x̃o(t + k) = Σ_{j=0}^{k−1} A^{k−1−j} ϑ(t + j)

x̃c(t + k) = Σ_{j=0}^{k−1} AK^{k−1−j} ϑ(t + j)

and are bounded by

‖x̃o(t + k)‖p ≤ Σ_{j=0}^{k−1} ‖A^{k−1−j}‖p ‖ϑ(t + j)‖p

‖x̃c(t + k)‖p ≤ Σ_{j=0}^{k−1} ‖AK^{k−1−j}‖p ‖ϑ(t + j)‖p
Notice that if the feedback gain is chosen such that ‖AK‖p < ‖A‖p, the uncertainty bounds for the predictions of the closed-loop system will be smaller than the corresponding bounds for the open loop.
Another interpretation of this is that by introducing a stabilizing regu-
lator in a cascade fashion we have reduced the reaction of the closed-loop
system to the uncertainties in the prediction. The effect of this controller can
be seen as a reduction of the Lipschitz constant of the system. The Lipschitz
constant is a gauge of the effect of the uncertainty on the prediction of the
state at the next sample time. As shown in [125], the discrepancy between
the nominal predicted trajectory and the uncertain evolution of the system is
reduced if the Lipschitz constant is lower. Consequently, the predictions, and
therefore the obtained MPC controller, are less conservative than the open-
loop ones.
Notice that if there are some constraints on u(t), these constraints have
to be translated into the new manipulated variables v(t). Let us consider that
the original problem constraints were expressed by:
8.7 Closed-Loop Predictions 245
Ru u + Rϑ ϑ ≤ r + Rx x(t) (8.47)
The manipulated variable is computed as: u(t + k) = −Kx(t + k | t)
+v(t+k). The manipulated variable vector u for the complete control horizon
can be expressed as
u = Mx x(t) + (I + Mv )v + Mϑ ϑ (8.48)
with (rows separated by semicolons, A∗ = A − BK)

u = [u(t); u(t + 1); · · · ; u(t + N − 1)]
v = [v(t); v(t + 1); · · · ; v(t + N − 1)]
ϑ = [ϑ(t); ϑ(t + 1); · · · ; ϑ(t + N − 1)]

Mx = −[K; KA∗; · · · ; KA∗^{N−1}]

Mv = −[ 0 0 · · · 0 ; KB 0 · · · 0 ; · · · ; KA∗^{N−2}B KA∗^{N−3}B · · · 0 ]

Mϑ = −[ 0 0 · · · 0 ; K 0 · · · 0 ; · · · ; KA∗^{N−2} KA∗^{N−3} · · · 0 ]
Ru (I + Mv )v + (Ru Mϑ + Rϑ )ϑ ≤ r + (Rx − Ru Mx )x
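The stacked form can be checked against a step-by-step simulation in the scalar case; all numbers below are illustrative:

```python
# Scalar sanity check of u = Mx x(t) + (I + Mv) v + M_theta theta for
# u(t+k) = -K x(t+k) + v(t+k), with A* = A - B K the pre-stabilized dynamics.

A, B, K, N = 0.95, 0.1, 8.5, 3
As = A - B * K                       # A* = 0.1

x0 = 0.7
v = [0.3, -0.2, 0.1]
th = [0.05, -0.04, 0.02]

# Recursive computation: simulate the closed loop and record the inputs.
x, u_rec = x0, []
for k in range(N):
    u = -K * x + v[k]
    u_rec.append(u)
    x = A * x + B * u + th[k]

# Stacked computation: row k of Mx is -K*As**k; Mv and M_theta are strictly
# lower triangular with entries -K*As**(k-1-j)*B and -K*As**(k-1-j).
u_stk = []
for k in range(N):
    u = -K * As ** k * x0 + v[k]
    for j in range(k):
        u += -K * As ** (k - 1 - j) * (B * v[j] + th[j])
    u_stk.append(u)
```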
x = Gx x(t) + Gu u + Gϑ ϑ (8.50)
It can easily be seen that if x(t) = 0, for any control sequence, the error due to
the uncertainty at t+3 is given by: x̃(t+3) = 0.9025ϑ(t)+0.95ϑ(t+1)+ϑ(t+2).
By making ϑ(t) = ϑ(t + 1) = ϑ(t + 2) = ϑmax, or ϑ(t) = ϑ(t + 1) = ϑ(t + 2) = ϑmin, we can see that by using these uncertainty values the error band can be made as big as 2.8525 ϑmax = 1.4263 or 2.8525 ϑmin = −1.4263. That is, if the nominal trajectory makes x̂(t + 3) ≥ 0, just by choosing the uncertainties to be ϑmax, the state vector will be higher than the admissible value (x(t) ≤ 1.2). The same situation happens when x̂(t + 3) ≤ 0, where choosing the uncertainties to be ϑmin will cause the state vector to have a lower value than allowed. That is, the
problem is not feasible for any point in the state space.
Now suppose that the following linear feedback is considered:
u(t) = −8.5x(t) + v(t). The resulting system dynamics are now described by:
x(t + 1) = 0.95x(t) + 0.1(−8.5x(t) + v(t)) + ϑ(t) = 0.1x(t) + 0.1v(t) + ϑ(t).
The error due to uncertainties can be computed as:
[ x̃(t + 1) ; x̃(t + 2) ; x̃(t + 3) ] = [ 1 0 0 ; 0.1 1 0 ; 0.01 0.1 1 ] [ ϑ(t) ; ϑ(t + 1) ; ϑ(t + 2) ] (8.54)
problem is feasible for any initial state, such that a nominal trajectory can be
computed separated from the bounds by 0.5, 0.55, and 0.555. That is, −0.7 ≤
x̂(t + 1) ≤ 0.7, −0.65 ≤ x̂(t + 2) ≤ 0.65, and −0.645 ≤ x̂(t + 3) ≤ 0.645. In
this case, a feasible solution exists for all x(t) ∈ [−1.2, 1.2].
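The error-band numbers of this example can be reproduced directly:

```python
# Error bands of the example: open loop a = 0.95 vs. closed loop u = -8.5x + v
# (closed-loop pole 0.1), with |theta| <= 0.5, evaluated at t + 3.

a_ol, a_cl, th_max, k = 0.95, 0.1, 0.5, 3

open_band = sum(a_ol ** j for j in range(k)) * th_max     # 2.8525 * 0.5
closed_band = sum(a_cl ** j for j in range(k)) * th_max   # 1.11 * 0.5
```

The open-loop band 1.42625 exceeds the state bound 1.2, while the closed-loop band 0.555 leaves room (0.645 at t + 3) for a feasible nominal trajectory.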
In conclusion, when open-loop MPC was used, no feasible solution could
be found such that constraints would be fulfilled in spite of future uncertain-
ties or perturbations. When we consider that information about the future
process state will be taken into account (by a linear feedback in this exam-
ple), the problem is feasible for any admissible value of x(t).
8.8 Exercises
8.1. Consider the second-order system described by the following equation
y(t + 1) = y(t) − 0.09y(t − 1) + 0.09u(t) + ε(t)
with −1 ≤ u(t) ≤ 1, −1 ≤ y(t) ≤ 1, −0.01 ≤ ε(t) ≤ 0.01. The system is
modelled by the following first-order model:
y(t + 1) = ay(t) + bu(t) + θ(t)
1. If the model parameters are chosen as a = 0.9, b = 0.1, determine a bound θmax for the uncertainty such that −θmax ≤ θ(t) ≤ θmax.
2. Explain how you would find θmax experimentally if you did not know the equations of the real system but you could experiment on the real system itself.
3. Find a bound for the output prediction trajectory; i.e., a bound ymax(t + j) such that |y(t + j|t)| ≤ ymax(t + j).
4. Explain how you would calculate a and b so that the uncertainty bound θmax is minimized. Find the minimizing model parameters a and b and the minimal bound θmax.
5. Find a bound for the output prediction trajectory with these new bounds and model. Compare the results with those obtained in number 3.
6. Formulate a min-max MPC using different types of objective functions
(quadratic, 1-norm, ∞-norm).
7. Solve the min-max MPC problems of number 6 and simulate the re-
sponses with different control horizons.
8.2. Given the system y(t + 1) = ay(t) + bu(t) + θ(t) with a = 0.9, b = 0.1,
−1 ≤ u(t) ≤ 1, −1 ≤ y(t) ≤ 1, the uncertainty θ(t) bounded by: −0.05 ≤
θ(t) ≤ 0.05 and a terminal region defined by the following box around the
origin −0.1 ≤ y(t + N ) ≤ 0.1.
1. Formulate a robust min-max MPC with N = 3 that takes the system to the
terminal region for any realization of the uncertainties with different ob-
jective functions (quadratic, 1-norm, ∞-norm). Comment on the nature
and difficulties of the optimization problems encountered.
2. Repeat the exercise of number 1 for a robust MPC but minimizing the
objective function for the nominal system instead of the min-max MPC.
3. Solve the problems of numbers 1 and 2 for N = 1, N = 2 and N = 3.
Discuss the feasibility of each.
4. Formulate the problems of numbers 1 and 2 for N = 3 but use a linear
feedback as indicated in Section 8.7.4. Discuss the feasibility.
with

A = [ 1 1 ; 0 1 ],  B = [ 0 ; 1 ],  D = [ 1 0 ],  −0.1 ≤ θ(t) ≤ 0.1,  −1 ≤ u(t) ≤ 1
The control objective consists of taking (and maintaining) the state vector as
close to zero as possible by solving the following min-max problem.
min_{u∈[−1,1]} max_{θ∈[−0.1,0.1]} Σ_{j=1}^{N} x(t + j)T x(t + j) + 10 u(t + j − 1)²