Partially supported by NSERC (Canada) and FQRNT (Québec)
1. INTRODUCTION
For many nonlinear processes, linear controllers
can perform quite adequately. However, nonlinear
control can be justified when the plant behavior is
highly nonlinear and subject to large and frequent
disturbances or when the operating points span
a wide range of nonlinear dynamics (Qin and
Badgwell, 1997). Furthermore, nonlinear predictive control must be considered when safety and actuator constraints exist, which is always the case for real processes.
meaning and a priori information is almost completely neglected. Furthermore, unlike fundamental models, they adequately represent the process only for conditions (operating points, types of inputs, etc.) similar to those found in the recorded data. In fact, if the underlying assumptions of fundamental models are respected, such models can mimic behaviors outside the range of calibration, and fewer data are required for their development.
As a result, fundamental models have been used
for nonlinear model predictive control (MPC).
However, the considered plants are almost always
a single unit operation with a relatively simple
dynamic model (Henson, 1998). Writing fundamental models is a difficult task but commercial
dynamic simulators are now available. However,
using commercial dynamic simulators for nonlinear predictive control (NLPC) does not seem to
have been reported yet in the literature (Henson,
1998). The main reason for not using complex fundamental nonlinear models for designing predictive controllers is certainly that the complexity of the on-line solution of the nonlinear programming problem increases with that of the model, hence leading to computational and reliability difficulties. Another reason why commercial simulators are not used is probably the unavailability
of the model equations to the control designer
(Henson, 1998).
In recent years, many research works have focused
on the nominal stability problem for nonlinear
model predictive control. Most proposed solutions consist in ensuring nominal stability by imposing
penalties or constraints on the terminal state of
the prediction horizon (Qin and Badgwell, 1997;
Mayne et al., 2000). These solutions are usually
computationally quite demanding. Fortunately,
algorithms to reduce the computational effort are
now appearing in the literature (Fontes, 2001;
Magni et al., 2001), but they still remain relatively
difficult to implement. However, most nonlinear
model predictive controllers do not use terminal
state constraints of any kind (Qin and Badgwell,
1997). They instead rely on setting the prediction horizon long enough to go beyond the steady state, hence approximating the infinite-horizon solution, which leads to nominal stability (Meadows et al., 1995). Therefore, the proposed controller is presented in its simplest form, without relying on terminal state constraints.
The proposed scheme transforms the optimization of the cost function required to solve MPC problems into a control problem. Thus, at each sampling time, instead of solving a complex nonlinear programming (NLP) problem to obtain the control action, a simple closed-loop simulation of the process with a pure integrator controller is conducted. The resulting optimization-by-simulation (OBS) does not require an NLP solver and is therefore very easy to implement.
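To make the idea concrete, here is a minimal sketch (not from the paper) of OBS on a toy first-order linear model; the model, gains, and horizon below are illustrative assumptions, not values from the examples in this work.

```python
# Toy linear model x(k+1) = a*x(k) + b*u, with the input held constant
# over the horizon. The H-step-ahead prediction is then affine in u, so
# driving the terminal prediction error to zero with a pure integrator
# recovers the exact minimizer of the terminal-error cost -- without
# any NLP solver. All numbers here are illustrative.
a, b, H = 0.8, 0.5, 15
x0, r = 0.0, 1.0  # current state and set point

def predict(u):
    """H-step-ahead model prediction for a constant input u."""
    x = x0
    for _ in range(H):
        x = a * x + b * u
    return x

u, K = 0.0, 0.4
for _ in range(200):
    u += K * (r - predict(u))  # integrator iteration (one "simulation step" j)

# Closed-form optimum of the terminal-error cost, for comparison
u_exact = (r - a**H * x0) * (1 - a) / (b * (1 - a**H))
```

Because the iteration is a contraction here, `u` converges to `u_exact`; for a nonlinear model the same loop applies, with the closed-form check no longer available.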
Two examples are given to illustrate the method's efficiency. The first one shows its use in a constrained multivariable case with an application to the linear control of a grinding circuit. The
second one presents the nonlinear control of the
cooling zone of an induration furnace where a
phenomenological simulator is used as the process
model.
2. OPTIMIZATION-FREE CONSTRAINED
NONLINEAR PREDICTIVE CONTROL
2.1 Notation
The process inputs and outputs at time t = k are respectively u(k) ∈ ℝⁿ and y(k) ∈ ℝⁿ (all vectors in the paper are columns). The set points are r(k) ∈ ℝⁿ. The best plant model MNy, possibly based on phenomenological relationships and therefore probably highly complex and nonlinear, is described by

xNy(k + 1) = fy(xNy(k), u(k))    (1)
yN(k) = gy(xNy(k))    (2)
(Figure 1: proposed control structure — the deterministic predictor MNy and the stochastic predictor provide ŶN(1:H) and ŶS(1:H); filters GT and GR shape the set points r(k) into R̂(1:H) and the error Ê(1:H); the "Min J" block computes the input u(k) applied to the process.)

At each sampling time, the cost function to minimize is the terminal prediction error

J = êᵀ(k + H/k) ê(k + H/k),  with ê(k + H/k) = r̂(k + H/k) − ŷN(k + H/k) − ŷS(k + H/k)    (9)

subject to the control horizon constraint (the input is held constant over the prediction horizon)

u(k + i/k) = u(k/k),  i = 1, ..., H − 1    (10)

and to the amplitude constraints on the inputs and on the constrained variables ŵN predicted by the model MNw:

umin ≤ u(k + i/k) ≤ umax    (11)
wmin ≤ ŵN(k + i/k) ≤ wmax    (12)
To achieve the above objective, a new predictive control scheme is proposed. The control is
calculated by repeating the following procedure
at every sampling time. The control structure
(Figure 1) is similar to the GlobPC presented
by Desbiens et al. (2000) (with the simplification
that the tracking and regulation controllers are
identical).
Step 1 : Measure the process outputs y(k).
Step 2 : Estimate the process disturbance with
the IMC structure
yS(k) = y(k) − yN(k)    (13)
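As a minimal illustration of Steps 1–2 (the function name and the numbers below are placeholders, not from the paper):

```python
# IMC disturbance estimate (eq. 13): the internal model MNy runs in
# parallel with the plant, and yS(k) = y(k) - yN(k) captures the
# disturbances and model mismatch acting on the process.
def imc_disturbance(y_measured, y_model):
    return y_measured - y_model

yS = imc_disturbance(1.30, 1.25)  # roughly 0.05
```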
2.3 Optimization-by-simulation
To better understand the OBS approach, the
constraints (11) and (12) will first be neglected.
Knowing the actual states of MNy, an integrator controller iteratively calculates different manipulated variables trying to bring ŷN(k + H/k) equal to r̂(k + H/k) − ŷS(k + H/k), hence minimizing the cost function (9). More precisely, every time
the cost function must be minimized (at every
sampling time k), the system depicted in Figure 2
is simulated until it converges. Each discrete step
in the simulation (denoted with the subscript j)
is equivalent to an optimization step for usual
optimization algorithms. The integrator controller
is
K(z) = diag( K1/(1 − z⁻¹), K2/(1 − z⁻¹), ..., Kn/(1 − z⁻¹) )    (14)

i.e. a diagonal matrix of discrete integrators with gains K1, ..., Kn.
The block Ê(1:H) → ê(H) extracts the prediction ê(k + H/k)j from its input Ê(1:H)j, since only the former appears in the cost function and therefore must be integrated. The block u(H) → U(0:H−1) builds the complete vector U(0:H−1)j from u(k)j by respecting the control horizon constraint (10). At every simulation step j, the states of the model MNy are reset to their actual values xNy(k) before calculating ŶN(1:H)j for the sequence of inputs U(0:H−1)j using (2) and (1); indeed, the objective is to find the manipulated variables that bring ŷN(k + H/k)j to the set point from the actual (at time k) state xNy(k). The value of u(k)j when steady state is reached is the result of the cost function minimization, i.e. it corresponds to the u(k) that must be applied to the plant. For linear systems, it would be easy to show that the solution is optimal since the integrator controller ensures that the outputs reach the set points.
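The inner loop just described — state reset, constant-input simulation over the horizon, integration of the terminal error, input saturation — can be sketched as follows. The toy model, gain, and limits are purely illustrative assumptions, standing in for the complex model MNy.

```python
import numpy as np

def f(x, u):
    """Toy nonlinear state update x(k+1) = f(x, u), an illustrative
    stand-in for the model MNy."""
    return 0.9 * x + 0.1 * np.tanh(u)

def obs_control(x_now, target, H=20, K=0.5, u_min=-1.0, u_max=1.0,
                n_iter=500):
    """Optimization-by-simulation: each simulation step j resets the
    model to its actual state, simulates H steps with the constant
    input u (control horizon of one), and integrates the terminal
    error; a saturation enforces the input amplitude constraints."""
    u = 0.0
    for _ in range(n_iter):
        x = x_now                    # reset states to xNy(k)
        for _ in range(H):
            x = f(x, u)              # H-step-ahead prediction
        e = target - x               # terminal error e(k+H|k)
        u = np.clip(u + K * e, u_min, u_max)  # integrator + saturation
    return u

u_k = obs_control(x_now=0.0, target=0.5)
```

The steady-state value of the loop is the control move applied to the plant; a smaller gain K slows convergence but keeps the iteration stable for strongly nonlinear models.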
(Figure 2: optimization-by-simulation loop — the integrator K(z) drives the model MNy, reset to xNy(k) at each step j, through the blocks ê(H) and u(H) → U(0:H−1); saturations umin/umax and the limits wmin/wmax on the constraint model MNw (states xNw(k), predictions ŴN(1:H)j) enforce the amplitude constraints.)

(Figure: grinding circuit — rod mill and ball mill; y1: circulating load (t/h).)
F xF = O xO + U xU    (16)
(Figure: grinding circuit control results — outputs and set points for y1: CL [t/h] and u2: WF [m3/h], and outputs against their constraints; time [s].)
yN(s) = [ 13,8/((1+5700s)(1+400s))              4,2(1−700s)/((1+5000s)(1+5s))
          0,2(1−900s)e^(−600s)/((1+5200s)(1+750s))   0,012(1+39500s)/((1+4400s)(1+50s)) ] u(s)    (17)
The second example shows how an optimization-free predictive controller may be efficient when based on a phenomenological simulator.
The same models are used for the plant simulation (no process-model mismatch). The sampling
period Ts is 200 s and the prediction horizon is
H = 12. The stochastic model is given by
yS(k) = [(1 − 0,8z⁻¹)/(1 − z⁻¹)] I ξ(k)    (19)
wN(s) = [ 5,749/((1+5500s)(1+210s))              1,962/(1+4700s)
          0,0255(1−5600s)/((1+5300s)(1+750s))    0,14(1+4050s)/((1+3200s)(1+60s)) ] u(s)    (18)
(Figure: cooling zone control results — outputs obtained with the optimal NLP solution ("Optimal") and with the optimization-free controller ("Optimfree"), together with the set points; time [s].)
REFERENCES
A. Desbiens, D. Hodouin, and É. Plamondon. Global predictive control: A unified control structure for decoupling setpoint tracking, feedforward compensation and disturbance rejection dynamics. IEE Proceedings - Control Theory and Applications, 147:465–475, 2000.
F.A.C.C. Fontes. A general framework to design stabilizing nonlinear model predictive controllers. Systems & Control Letters, 42:127–143, 2001.
M.A. Henson. Nonlinear model predictive control: Current status and future directions. Computers & Chemical Engineering, 23:187–202, 1998.
R. Lestage, A. Pomerleau, and A. Desbiens. Improved constrained cascade control for parallel processes. Control Engineering Practice, 7:969–974, 1999.
L. Magni, G. De Nicolao, and L. Magnani. A stabilizing model-based predictive control algorithm for nonlinear systems. Automatica, 37:1351–1362, 2001.
D.Q. Mayne, J.B. Rawlings, C.V. Rao, and P.O.M. Scokaert. Constrained model predictive control: Stability and optimality. Automatica, 36:789–814, 2000.
E.S. Meadows, M.A. Henson, J.W. Eaton, and J.B. Rawlings. Receding horizon control and discontinuous state feedback stabilization. International Journal of Control, 62:1217–1229, 1995.
D. Pomerleau, D. Hodouin, and É. Poulin. Performance analysis of a dynamic phenomenological controller for a pellet cooling process. Journal of Process Control, 13:137–151, 2003.
S.J. Qin and T.A. Badgwell. An overview of nonlinear model predictive control applications. In J.C. Kantor, C.E. Garcia, and B. Carnahan, editors, Fifth International Conference on Chemical Process Control, volume 93, pages 232–256, Englewood Cliffs, 1997. AIChE and CACHE.
T. Söderström and P. Stoica. System Identification, chapter 1. Prentice Hall, New Jersey, 1988.