An Overview of Nonlinear Model Predictive Control Applications
S. Joe Qin and Thomas A. Badgwell
Abstract. This paper provides an overview of nonlinear model predictive con-
trol (NMPC) applications in industry, focusing primarily on recent applica-
tions reported by NMPC vendors. A brief summary of NMPC theory is pre-
sented to highlight issues pertinent to NMPC applications. Five industrial
NMPC implementations are then discussed with reference to modeling, con-
trol, optimization, and implementation issues. Results from several industrial
applications are presented to illustrate the benefits possible with NMPC tech-
nology. A discussion of future needs in NMPC theory and practice is provided
to conclude the paper.
1. Introduction
The term Model Predictive Control (MPC) describes a class of computer control
algorithms that control the future behavior of a plant through the use of an ex-
plicit process model. At each control interval the MPC algorithm computes an
open-loop sequence of manipulated variable adjustments in order to optimize fu-
ture plant behavior. The first input in the optimal sequence is injected into the
plant, and the entire optimization is repeated at subsequent control intervals. MPC
technology was originally developed for power plant and petroleum refinery appli-
cations, but can now be found in a wide variety of manufacturing environments
including chemicals, food processing, automotive, aerospace, metallurgy, and pulp
and paper. Theoretical and practical issues associated with MPC technology are
summarized in several recent review articles. Qin and Badgwell (1997) present a
brief history of MPC technology and a survey of industrial applications in [25].
Meadows and Rawlings summarize theoretical properties of MPC algorithms in
[16]. Morari and Lee discuss the past, present, and future of MPC technology in
[18].
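To make the receding-horizon calculation concrete, the following sketch illustrates the moving-horizon loop described above: an open-loop input sequence is computed, only its first element is applied, and the optimization is repeated at the next interval. The functions solve_open_loop_problem and plant_response are hypothetical stand-ins, not any vendor's implementation.

    import numpy as np

    def solve_open_loop_problem(y_meas, M=5):
        # Hypothetical stand-in for the MPC optimization: returns an
        # open-loop sequence of M manipulated-variable adjustments.
        return np.zeros(M)

    def plant_response(u):
        # Hypothetical one-step plant simulation driven by the applied input.
        return 0.0 + u

    y_meas = 0.0
    for k in range(50):                                # control intervals
        u_sequence = solve_open_loop_problem(y_meas)   # optimize future behavior
        u_applied = u_sequence[0]                      # inject only the first input
        y_meas = plant_response(u_applied)             # new measurement for feedback
        # the entire optimization is repeated at the next control interval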
The success of MPC technology as a process control paradigm can be at-
tributed to three important factors. First and foremost is the incorporation of an
explicit process model into the control calculation. This allows the controller, in
principle, to deal directly with all significant features of the process dynamics.
Secondly the MPC algorithm considers plant behavior over a future horizon in
time. This means that the effects of feedforward and feedback disturbances can be
anticipated and removed, allowing the controller to drive the plant more closely
along a desired future trajectory. Finally the MPC controller considers process
input, state and output constraints directly in the control calculation. This means
that constraint violations are far less likely, resulting in tighter control at the opti-
mal constrained steady-state for the process. It is the inclusion of constraints that
most clearly distinguishes MPC from other process control paradigms.
Though manufacturing processes are inherently nonlinear, the vast major-
ity of MPC applications to date are based on linear dynamic models, the most
common being step and impulse response models derived from the convolution
integral. There are several potential reasons for this. Linear empirical models can
be identified in a straightforward manner from process test data. In addition, most
applications to date have been in refinery processing [25], where the goal is largely
to maintain the process at a desired steady-state (regulator problem), rather than
moving rapidly from one operating point to another (servo problem). A carefully
identified linear model is sufficiently accurate in the neighborhood of a single
operating point for such applications, especially if high quality feedback measurements are
available. Finally, by using a linear model and a quadratic objective, the nominal
MPC algorithm takes the form of a highly structured convex Quadratic Program
(QP), for which reliable solution algorithms and software can easily be found [31].
This is important because the solution algorithm must converge reliably to the
optimum in no more than a few tens of seconds to be useful in manufacturing
applications. For these reasons, in many cases a linear model will provide the
majority of the benets possible with MPC technology.
Nevertheless, there are cases where nonlinear effects are significant enough to
justify the use of NMPC technology. These include at least two broad categories
of applications:
- Regulator control problems where the process is highly nonlinear and subject to large frequent disturbances (pH control, etc.)
- Servo control problems where the operating points change frequently and span a sufficiently wide range of nonlinear process dynamics (polymer manufacturing, ammonia synthesis, etc.).
It is interesting to note that some of the very first MPC papers describe
ways to address nonlinear process behavior while still retaining a linear dynamic
model in the control algorithm. Richalet et al. [28], for example, describe how
nonlinear behavior due to load changes in a steam power plant application was
handled by executing their Identification and Command (IDCOM) algorithm at
a variable frequency. Prett and Gillette [24] describe applying a Dynamic Matrix
Control (DMC) algorithm to control a fluid catalytic cracking unit. Model gains
were obtained at each control iteration by perturbing a detailed nonlinear steady-
state model.
While theoretical aspects of NMPC algorithms have been discussed quite ef-
fectively in several recent publications (see, for example, [14] and [16]), descriptions
of industrial NMPC applications are much more difficult to find. A rare exception
can be found in the paper by Ogunnaike and Wright presented at the CPC-V
conference [20]. This is probably due to the fact that industrial activity in NMPC
applications has only begun to take off in the last few years.

[Figure 1. Rough distribution of the number of MPC applications versus the degree of process nonlinearity; application areas range from petrochemical and chemicals to gas plants and polymers.]
In a previous survey of MPC technology [25], over 2200 commercial applica-
tions were discovered. However, almost all of these were implemented with linear
models and were clustered in refinery and petrochemical processes. In preparing
this paper the authors found a sizable number of NMPC applications in areas where
MPC has not traditionally been applied. Figure 1 shows a rough distribution of
the number of MPC applications versus the degree of process nonlinearity. MPC
technology has not yet penetrated deeply into areas where process nonlinearities
are strong and market demands require frequent changes in operating conditions.
It is these areas that provide the greatest opportunity for NMPC applications.
The primary purpose of this paper is to provide a snapshot of the current
state-of-the-art in NMPC applications. A brief summary of NMPC theory is pre-
sented to highlight what is known about closed-loop properties and to emphasize
issues pertinent to NMPC applications. Then several industrial NMPC implemen-
tations are discussed in terms of modeling, control, optimization, and implemen-
tation issues. A few illustrative industrial applications are then discussed in detail.
The paper concludes with a discussion of future needs and trends in NMPC theory
and applications.
J = \sum_{j=1}^{P} \left\| y_{k+j} - y_{k+j}^{d} \right\|_{Q_j}^{q} + \sum_{j=0}^{M-1} \left( \left\| \Delta u_{k+j} \right\|_{R_j}^{q} + \left\| u_{k+j} - u_{k+j}^{d} \right\|_{S_j}^{q} \right)   (5)
subject to a model constraint:
x_{k+j} = f(x_{k+j-1}, u_{k+j-1})   \forall\; j = 1, \ldots, P
y_{k+j} = g(x_{k+j}) + b_k   \forall\; j = 1, \ldots, P
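As a rough illustration of how this open-loop problem can be solved numerically at each control interval, the following sketch minimizes a version of Eq. 5 with q = 2 and S_j = 0, enforcing the model constraint by forward simulation (single shooting). The model functions f and g, the weights, and all numerical values are illustrative assumptions, not any vendor's formulation.

    import numpy as np
    from scipy.optimize import minimize

    def f(x, u):
        # Hypothetical single-state nonlinear state equation.
        return 0.8 * x + 0.5 * np.tanh(u)

    def g(x):
        # Hypothetical output equation.
        return x

    def nmpc_objective(u_seq, x_k, b_k, y_target, u_prev, P, M, Q, R):
        # Simulate the model forward and accumulate output tracking error
        # plus move suppression (Eq. 5 with q = 2 and S_j = 0).
        J, x, u_last = 0.0, x_k, u_prev
        for j in range(P):
            u = u_seq[min(j, M - 1)]            # inputs held constant beyond M
            x = f(x, u)
            y = g(x) + b_k                      # output bias b_k from feedback
            J += Q * (y - y_target) ** 2
            if j < M:
                J += R * (u - u_last) ** 2      # penalize Delta u
                u_last = u
        return J

    P, M, Q, R = 20, 5, 1.0, 0.1
    x_k, b_k, y_target, u_prev = 0.0, 0.0, 1.0, 0.0
    res = minimize(nmpc_objective, x0=np.zeros(M),
                   args=(x_k, b_k, y_target, u_prev, P, M, Q, R),
                   method="SLSQP", bounds=[(-2.0, 2.0)] * M)
    u_apply = res.x[0]   # only the first move is sent to the plant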
The first solution, proposed by Keerthi and Gilbert [9], involves adding a terminal
state constraint to the NMPC algorithm of the form:
x_{k+P} = x_s   (7)
With such a constraint enforced, the objective function for the controller (Eq.
5) becomes a Lyapunov function for the closed loop system, leading to nominal
stability. Unfortunately such a constraint may be quite difficult to satisfy in real
time; exact satisfaction requires an infinite number of iterations for the numerical
solution code. This motivated Michalska and Mayne [17] to seek a less stringent
stability requirement. Their main idea is to define a neighborhood W around the
desired steady-state xs within which the system can be steered to xs by a constant
linear feedback controller. They add to the NMPC algorithm a constraint of the
form:
(x_{k+P} - x_s) \in W   (8)
If the current state xk lies outside this region then the NMPC algorithm described
above is solved with constraint 8. Once inside the region W the control switches
to the previously determined constant linear feedback controller. Michalska and
Mayne describe this as a dual-mode controller.
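A minimal sketch of the dual-mode switching logic is given below, under the assumptions that W is taken as a ball of radius rho around x_s and that K is a precomputed stabilizing linear gain valid inside W; the NMPC solver is passed in as a hypothetical function.

    import numpy as np

    def dual_mode_control(x_k, x_s, u_s, K, rho, solve_nmpc_with_terminal_region):
        if np.linalg.norm(x_k - x_s) <= rho:
            # Mode 2: inside W, switch to the constant linear feedback controller.
            return u_s + K @ (x_k - x_s)
        # Mode 1: outside W, solve the NMPC problem with constraint (8),
        # i.e., the predicted terminal state x_{k+P} must land inside W.
        return solve_nmpc_with_terminal_region(x_k, x_s, rho)

    # Hypothetical usage with a two-state example and a stub NMPC solver:
    K = np.array([[-0.5, -0.1]])
    u = dual_mode_control(np.array([0.05, 0.0]), np.zeros(2), np.zeros(1), K, 0.1,
                          lambda x, xs, rho: np.zeros(1))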
A third solution to the nominal stability problem, described by Meadows et
al. [15], involves setting the prediction and control horizons to infinity,
P, M → ∞. For this case the objective function in Eq. 5 also serves as a suitable
Lyapunov function, leading to nominal stability. They demonstrate that if the
initial NMPC calculation has a feasible solution, then a feasible solution exists at
each subsequent time step.
These theoretical results provide a foundation upon which to build an imple-
mentable NMPC controller. The challenge of the industrial practitioner is to take
these ideas to the marketplace, which means that a number of additional practical
issues must be confronted. Among other things, one must choose an appropriate
model form, decide how best to identify or derive the model, and develop a reli-
able numerical solution method. The following section describes how five NMPC
vendors have addressed these issues.
3.1.1. State-Space Models Because step response and impulse response models
are non-parsimonious, a class of state-space model is adopted in the AspenTarget(TM)
product (formerly known as NeuCOP II) by Aspen Technology, which has a linear
dynamic state equation and a nonlinear static output relation. A distinguishing feature
of this modeling algorithm is the use of extended Kalman filters (EKF) to correct
for model-plant mismatch and unmeasured disturbances (Zhao, et al. 1998).
The EKF provides a bias and gain correction to the model on-line. This function
replaces the constant output error feedback scheme typically employed in MPC
practice.
A novel feature of the identification algorithm is that the dynamic model
is built with filters and the filter states are used to predict the output variables.
Due to the simplistic filter structure, each input variable has its own set of state
variables, making the A matrix block-diagonal. This treatment assumes that each
state variable is only affected by one input variable, i.e., the inputs are decoupled.
For the typical case where input variables are coupled, the algorithm could generate
state variables that are linearly dependent or collinear. In other words, the resulting
state vector would not be a minimal realization. Nevertheless, the use of the PLS
algorithm makes the estimation of the C matrix well-conditioned.
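The sketch below shows how a per-input bank of filter states leads to a block-diagonal A matrix, with an output map C mixing all filter states. The first-order filters, the time constants, and the random stand-in for the PLS-estimated C matrix are illustrative assumptions only.

    import numpy as np
    from scipy.linalg import block_diag

    def filter_block(time_constants, dt=1.0):
        # One diagonal block per input: a few first-order lags with assumed
        # (illustrative) time constants.
        a = np.exp(-dt / np.asarray(time_constants, dtype=float))
        A_i = np.diag(a)
        B_i = (1.0 - a).reshape(-1, 1)
        return A_i, B_i

    blocks = [filter_block([2.0, 10.0]), filter_block([5.0, 20.0, 60.0])]  # two inputs
    A = block_diag(*[Ai for Ai, _ in blocks])   # block-diagonal: inputs decoupled
    B = block_diag(*[Bi for _, Bi in blocks])
    # The output map C (estimated by PLS in the vendor description) mixes all
    # filter states; collinear states make this step ill-conditioned without PLS.
    C = np.random.default_rng(0).normal(size=(1, A.shape[0]))

    x = np.zeros(A.shape[0])
    u = np.array([1.0, 0.5])
    for _ in range(3):
        x = A @ x + B @ u    # each input drives only its own block of states
    y = C @ x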
The iteration between the estimation of the A, B, and C matrices will likely
eliminate the initial error in estimating the process time constants. A theoretical
issue here is whether the iterations are guaranteed to converge.
Process nonlinearity is added to the model with concern for model validity
using the model confidence index. When the model is used for extrapolation, only
the linear portion of the model is used. The use of EKF for output error feedback
is interesting; the benefit of this treatment is yet to be demonstrated.
3.1.2. Input-Output Models The MVC algorithm from Continental Controls
and the Process Perfecter from Pavilion Technologies use input-output models. To
simplify the system identification task, both products use a static nonlinear model
superimposed upon a linear dynamic model.
Martin, et al. (1998) describe the details of the Process Perfecter modeling
approach. Their presentation is in single-input-single-output form, but the concept
is applicable to multi-input-multi-output models. It is assumed that the process
input and output can be decomposed into a steady-state portion which obeys a
nonlinear static model and a deviation portion that follows a dynamic model. For
any input u_k and output y_k, the deviation variables are calculated as follows,

\delta u_k = u_k - u_s   (12)

\delta y_k = y_k - y_s   (13)
where us and ys are the steady-state values for the input and output, respectively,
and follow a rather general nonlinear relation:
y_s = h_s(u_s)   (14)
The deviation variables follow a general linear dynamic relation:

\delta y_k = \sum_{i=1}^{n} \left( a_i \, \delta y_{k-i} + b_i \, \delta u_{k-i} \right)   (15)
The identification of the linear dynamic model is based on plant test data from
pulse tests, while the nonlinear static model is a neural network built from histori-
cal data. It is believed that the historical data contain rich steady-state information
and plant testing is needed only for the dynamic sub-model. Since the dynamic linear
model has a fixed gain which is very likely to be different from the corresponding
local gain of the static nonlinear model, the gain of the linear sub-model is scaled
to be equal to the nonlinear local gain at the current input, i.e.,
K_s = \left. \frac{d y_s}{d u_s} \right|_{u_k}   (16)
The coefficients \{ b_i, \; i = 1, 2, \ldots, n \} in Eq. 15 are rescaled to achieve a gain of K_s(u_k).
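A minimal numerical sketch of this rescaling step is shown below. The static map h_s is a hypothetical stand-in for the neural network built from historical data, and the linear sub-model coefficients are illustrative.

    import numpy as np

    def h_s(u):
        return np.tanh(u)                      # stand-in nonlinear steady-state model

    def local_gain(u, eps=1e-5):
        return (h_s(u + eps) - h_s(u - eps)) / (2 * eps)   # K_s = dy_s/du_s at u

    a = np.array([0.6, 0.2])                   # linear sub-model from pulse tests
    b = np.array([0.1, 0.05])
    K_lin = b.sum() / (1.0 - a.sum())          # fixed gain of the linear sub-model

    u_k = 0.4
    K_s = local_gain(u_k)
    b_rescaled = b * (K_s / K_lin)             # rescale b_i so the gain equals K_s(u_k)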
The use of the composite model in the control step can be described as follows.
Based on the desired output target y_s^d, a nonlinear optimization program
calculates the best input and output values u_s^f and y_s^f using the nonlinear static
model. During the dynamic controller calculation, the nonlinear static gain is ap-
proximated by a linear interpolation of the initial and nal steady-state gains,
K_s(\delta u_k) = K_s^i + \frac{K_s^f - K_s^i}{u_s^f - u_s^i} \, \delta u_k   (17)
where u_s^i and u_s^f are the current and the next steady-state values for the input,
respectively, and

K_s^i = \left. \frac{d y_s}{d u_s} \right|_{u_s^i}   (18)

K_s^f = \left. \frac{d y_s}{d u_s} \right|_{u_s^f}   (19)
which are evaluated using the static nonlinear model. Substituting the approximate
gain Eq. 17 into the linear sub-model yields,
\delta y_k = \sum_{i=1}^{n} \left( a_i \, \delta y_{k-i} + \bar{b}_i \, \delta u_{k-i} + g_i \, \delta u_{k-i}^2 \right)   (20)

where

\bar{b}_i = \frac{b_i K_s^i \left( 1 - \sum_{j=1}^{n} a_j \right)}{\sum_{j=1}^{n} b_j}   (21)

g_i = \frac{b_i \left( 1 - \sum_{j=1}^{n} a_j \right)}{\sum_{j=1}^{n} b_j} \cdot \frac{K_s^f - K_s^i}{u_s^f - u_s^i}   (22)
The purpose of this approximation is to reduce computational complexity during
the control calculation.
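The sketch below works through the reconstructed Eqs. (17)-(22) numerically: the gain is interpolated between the initial and final steady states and folded into the deviation model, producing the quadratic term. The static map h_s, the sub-model coefficients, and the stored deviations are illustrative assumptions.

    import numpy as np

    def h_s(u):                                 # hypothetical static nonlinear model
        return np.tanh(u)

    def dh_s(u, eps=1e-5):
        return (h_s(u + eps) - h_s(u - eps)) / (2 * eps)

    a = np.array([0.6, 0.2])                    # linear deviation sub-model (Eq. 15)
    b = np.array([0.1, 0.05])

    u_s_i, u_s_f = 0.2, 1.0                     # current and next steady-state inputs
    K_i, K_f = dh_s(u_s_i), dh_s(u_s_f)         # Eqs. (18)-(19)

    scale = (1.0 - a.sum()) / b.sum()
    b_bar = b * K_i * scale                     # Eq. (21)
    g = b * scale * (K_f - K_i) / (u_s_f - u_s_i)   # Eq. (22)

    # One-step prediction with the quadratic deviation model (Eq. 20),
    # using stored past deviations (illustrative values):
    dy = np.array([0.02, 0.01])                 # delta y_{k-1}, delta y_{k-2}
    du = np.array([0.10, 0.05])                 # delta u_{k-1}, delta u_{k-2}
    dy_k = np.sum(a * dy + b_bar * du + g * du ** 2)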
It can be seen that the steady-state target values are calculated from a non-
linear static model, whereas the dynamic control moves are calculated based on
the quadratic model in Eq. 20. However, the quadratic model coefficients (i.e., the
local gain) change from one control execution to the next, simply because they
are rescaled to match the local gain of the static nonlinear model. This approxi-
mation strategy can be interpreted as a successive linearization at the initial and
final states followed by a linear interpolation of the linearized gains. The interpo-
lation strategy resembles gain-scheduling, but the overall model is different from
gain scheduling because of the gain re-scaling. This model makes the assumption
that the nonlinear steady-state gain is independent of unmeasured disturbances.
Another assumption is that the process dynamics remain linear over the entire
range of operation. Asymmetric dynamics (e.g., different local time constants), as
a result, cannot be represented by this model.
3.1.3. First Principles Models Since empirical modeling approaches can be
unreliable and require a tremendous amount of experimental data, two of the ven-
dors provide the option to use first principles models. These products usually ask
the user to provide the first principles models with some kind of open equation
editor; the control algorithms can then use the user-supplied models to calculate
future control moves. NOVA-NLC from DOT Products and Treiber Controls' OPB
(VanDoren, 1997) fall in this category.
Hybrid modeling approaches that combine first principles knowledge with
empirical modeling are also found in the commercial packages. The Process Per-
fecter uses first principles models in conjunction with empirical models (Demoro,
et al. 1997). The first principles models can be steady-state balance equations, a
nonlinear function of physical variables that generates another physically meaningful
variable, such as production rate, or simply gain directions to validate empirical
models.
3.2. Output Feedback
In the face of unmeasured disturbances and model errors, some form of feedback
is required to remove steady-state offset. As discussed earlier, the most common
approach for incorporating feedback into MPC algorithms involves comparing
the measured and predicted process outputs [25]. The difference between
the two is added to future output predictions to bias them in the direction of the
measured output. This can be interpreted as assuming that an unmeasured step
disturbance enters at the process output and remains constant for all future time.
For the case of a linear model and no active constraints, Rawlings, et al. [27] have
shown that this form of feedback leads to offset-free control. As can be seen in
Table 2, four of the five NMPC algorithms described here provide the constant
output feedback option.
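A minimal sketch of this constant output-disturbance feedback is given below: the current measured-minus-predicted error defines the bias b_k, which is held constant over all future predictions. The function name and numbers are illustrative only.

    def biased_predictions(y_measured, y_model_now, y_model_future):
        # Assume an unmeasured step disturbance at the process output:
        # the current prediction error is carried forward as a constant bias.
        b_k = y_measured - y_model_now
        return [y + b_k for y in y_model_future]

    y_future = biased_predictions(y_measured=2.10, y_model_now=2.00,
                                  y_model_future=[2.05, 2.12, 2.20])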
When the process has a pure integrator, the constant output disturbance
assumption will no longer lead to offset-free control. For this case it is common to
assume that an integrating disturbance with a constant ramp rate has entered
at the output [25]. The PFC and Aspen Target algorithms provide this feedback
option.
It is well-known from linear control theory that additional knowledge about
unmeasured disturbances can be exploited to provide better feedback by designing
a Kalman Filter [7]. Muske and Rawlings demonstrate how this can be accom-
plished in the context of MPC [19]. It is interesting to note that two of the NMPC
algorithms in Table 2 provide options for output feedback based on a nonlinear
generalization of the Kalman Filter known as the Extended Kalman Filter (EKF)
[26]. Aspen Target provides an EKF to estimate both a bias and a feedback gain.
NOVA-NLC uses an EKF to develop complete state and noise estimates.
3.3. Steady-State Optimization
The PFC, Aspen Target, MVC, and Process Perfecter controllers split the control
calculation into a local steady-state optimization followed by a dynamic optimiza-
tion. Optimal steady-state targets are computed for each input and output; these
are then passed to a dynamic optimization to compute the optimal input sequence
required to move toward these targets. From Table 2 it can be seen that these
calculations involve optimizing a quadratic objective that includes input and out-
put contributions. The exception is the NOVA-NLC controller that performs the
dynamic and steady-state optimizations simultaneously.
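The sketch below illustrates one way such a local steady-state target calculation can look: a small quadratic objective in the input and output targets, subject to the static model y_s = h_s(u_s). The static map, weights, and desired values are illustrative assumptions, not any vendor's formulation.

    import numpy as np
    from scipy.optimize import minimize

    def h_s(u):                        # hypothetical static nonlinear model
        return np.tanh(u)

    y_des, u_des = 0.6, 0.0            # desired resting values (assumed)
    w_y, w_u = 1.0, 0.01               # output and input weights (assumed)

    def ss_objective(u_s):
        y_s = h_s(u_s[0])
        return w_y * (y_s - y_des) ** 2 + w_u * (u_s[0] - u_des) ** 2

    res = minimize(ss_objective, x0=np.array([0.0]), bounds=[(-2.0, 2.0)])
    u_s_target = res.x[0]
    y_s_target = h_s(u_s_target)
    # u_s_target and y_s_target are then passed to the dynamic optimization,
    # which computes the input sequence that moves the plant toward them.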
3.4. Dynamic Optimization
At the dynamic optimization level, an MPC controller must compute a set of
MV adjustments that will drive the process to the steady-state operating point
without violating constraints. All of the algorithms described here use a form of
the objective given in Eq. 5.
The PFC controller includes only the process input and output terms in the
dynamic objective, and uses constant weight matrices (Qj = Q, Rj = R, Sj = 0,
q = 2). The Aspen Target and MVC products include all three terms with constant
weights (Qj = Q, Rj = R, Sj = S, q = 2). The NOVA-NLC product adds to this
the option of one-norms (Qj = Q, Rj = R, Sj = S, q = 1, 2).
The Process Perfecter product uses a dynamic objective of the form,
J = \sum_{j=1}^{P} \left\| y_{k+j} - y_{k+j}^{d} \right\|_{Q_j}^{2}

where y_{k+j}^{d} is the desired target value for the output vector. This objective
function does not use move suppression or a reference trajectory (Rj = 0, Sj = 0,
q = 2); instead it uses trajectory weighting that makes Qj gradually increase
over the horizon P . With this type of weighting, control errors at the beginning
of the horizon are less important than those towards the end of the horizon, thus
allowing a smoother control action.
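A small sketch of trajectory weighting follows: Q_j grows over the horizon P so that errors near the end of the horizon dominate the cost. The quadratic growth profile used here is an assumption, not the vendor's published schedule.

    import numpy as np

    P = 20
    Q = np.array([((j + 1) / P) ** 2 for j in range(P)])   # Q_j increases over the horizon

    def trajectory_weighted_cost(y_pred, y_target):
        e = np.asarray(y_pred) - np.asarray(y_target)
        return float(np.sum(Q * e ** 2))   # sum_j ||y_{k+j} - y^d_{k+j}||^2_{Q_j}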
3.5. Constraint Formulations
There are basically two types of constraints used in industrial MPC technology:
hard and soft [25]. Hard constraints are those which should never be violated. Soft
constraints allow the possibility of a violation; the magnitude of the violation is
generally subjected to a quadratic penalty in the objective function.
All of the NMPC algorithms described here allow hard input maximum,
minimum, and rate of change constraints to be defined. These are generally defined
so as to keep the lower level MV controllers in a controllable range, and to prevent
violent movement of the MV's at any single control execution. The PFC algorithm
also accommodates maximum and minimum input acceleration constraints which
are useful in mechanical servo control applications.
The Aspen Target, MVC, NOVA-NLC, and Process Perfecter algorithms per-
form rigorous optimizations subject to the hard input constraints. The PFC algo-
rithm, however, enforces input hard constraints only after performing an uncon-
strained optimization. This is accomplished by clipping input values that exceed
the input constraints. It should be noted that this method does not, in general,
result in an optimal solution in the sense of satisfying the Karush-Kuhn-Tucker
(KKT) conditions for optimality.
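The clipping idea can be sketched as follows: each element of the unconstrained move is simply saturated at its bounds. The inclusion of a rate-of-change limit and all numerical values are illustrative; in general the clipped sequence is not the solution of the corresponding constrained problem.

    import numpy as np

    def clip_inputs(u_unconstrained, u_min, u_max, du_max, u_prev):
        u = np.clip(u_unconstrained, u_min, u_max)           # magnitude limits
        u = np.clip(u, u_prev - du_max, u_prev + du_max)     # rate-of-change limits
        return u

    u_clipped = clip_inputs(np.array([1.7, -0.4]), u_min=-1.0, u_max=1.0,
                            du_max=0.5, u_prev=np.array([0.9, -0.2]))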
The PFC, Aspen Target, MVC, and NOVA-NLC products enforce output
soft constraints as part of the dynamic optimization, as shown in Eq. 5. The
Aspen Target product also allows the option of hard output constraints in the
dynamic optimization; this is the only option available for output constraints in
the Process Perfecter. The use of hard output constraints is generally avoided in
MPC technology because a disturbance can easily cause such a controller to lose
feasibility.
Hard output constraints are handled in the Process Perfecter by using a frus-
tum method, as depicted in Figure 3(a). Compared to the typical hard constraint
formulation as shown in Figure 3(b), the frustum permits a larger control error
at the beginning of the horizon than at the end, but no error is allowed outside
the frustum. At the end of the horizon the frustum can have a non-zero zone,
instead of narrowing to a single line; the width of this zone is determined by the
accuracy of the process model, to allow for model errors.
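A minimal sketch of frustum-shaped output bounds follows, assuming (for illustration) a linear shrinkage of the allowed band down to a non-zero terminal zone whose half-width reflects the presumed model accuracy.

    import numpy as np

    def frustum_bounds(y_target, P, initial_half_width, terminal_half_width):
        # Allowed band around the target shrinks from a wide opening at the
        # start of the horizon to a non-zero zone at the end.
        widths = np.linspace(initial_half_width, terminal_half_width, P)
        return y_target - widths, y_target + widths   # y_{k+j} must stay inside

    lo, hi = frustum_bounds(y_target=1.0, P=10,
                            initial_half_width=0.5, terminal_half_width=0.05)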
3.6. Output Trajectories
Industrial MPC controllers use four basic options to specify future CV behavior:
a setpoint, zone, reference trajectory, or funnel [25]. All of the NMPC controllers
described here provide the option to drive the CV's to a fixed setpoint, with
deviations on both sides penalized in the objective function. In practice this type
of specication is very aggressive and may lead to very large input adjustments,
unless the controller is detuned in some fashion. This is particularly important
when the model differs significantly from the true process. For this reason all
of the controllers provide some way to detune the controller using either move
suppression, a reference trajectory, or time-dependent weights.
All of the controllers also provide a CV zone control option, designed to keep
the CV within a zone defined by upper and lower boundaries. A simple way to
implement zone control is to define soft output constraints at the upper and lower
boundaries.
[Figure 3. Output constraint formulations for the controlled variable y(k+j) over the horizon starting at time k: (a) the constraint frustum, which permits larger deviations early in the horizon and narrows toward the end; (b) a conventional hard output constraint.]

The PFC, Aspen Target, MVC, and NOVA-NLC algorithms provide a CV
reference trajectory option, in which the CV is required to follow a smooth path
from its current value to the setpoint. Typically a first order path is defined using
an operator-entered closed-loop time constant. In the limit of a zero time constant
the reference trajectory reverts back to a pure setpoint; for this case, however, the
controller would be sensitive to model mismatch unless some other strategy such
as move suppression is also being used. In general, as the reference trajectory time
constant increases, the controller is able to tolerate larger model mismatch.
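A minimal sketch of such a first-order reference trajectory is shown below; the sampling interval and numerical values are illustrative. As the time constant tau goes to zero, the trajectory collapses to the pure setpoint.

    import numpy as np

    def reference_trajectory(y_now, setpoint, P, tau, dt=1.0):
        # First-order path from the current CV value toward the setpoint,
        # parameterized by the operator-entered closed-loop time constant tau.
        alpha = np.exp(-dt / tau) if tau > 0.0 else 0.0   # tau -> 0 gives a pure setpoint
        return np.array([setpoint + (y_now - setpoint) * alpha ** (j + 1)
                         for j in range(P)])

    r = reference_trajectory(y_now=0.0, setpoint=1.0, P=10, tau=5.0)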
Although not shown in the table, it has been observed that the size and
scope of NMPC applications are typically much smaller than that of linear MPC
applications [13]. This is likely due to the computational complexity of NMPC
algorithms.
- shift/methanator module
- carbon dioxide removal module
- ammonia converter module
Poe and Munsif [23] describe the control modules in some detail; here we
focus only on the ammonia converter module.
The ammonia converter is a standard Kellogg "quench converter" design,
consisting of three nearly adiabatic catalyst beds, between which fresh feed is in-
troduced to cool the reaction products. The converter control module manipulates
the feed flow to the first bed as well as the quench flows to all three beds. This
is done in order to maintain the three bed inlet temperatures at their optimal
steady-state targets. Output constraints considered by the control include bed
outlet temperatures and quench flow valve positions. Feedforward control is pro-
vided for changes in feed flowrate, temperature, and pressure, hydrogen/nitrogen
ratio, and inert composition.
Figure 4 shows results for the second converter bed; results for the other beds
were similar. The MVC controller was able to significantly reduce temperature
variations at the bed inlet and outlet, allowing the average reaction temperature
to be increased without violating the bed outlet constraint. Overall the plant's
net fuel consumption was lowered by 1.8%, while the net production of ammonia
increased by 0.7%.
optimizer. In each experiment the polymer melt flow rate was changed from 30 to
35. It was observed that the linear MPC was unable to accomplish the transition
35. It was observed that the linear MPC was unable to accomplish the transition
within 30 minutes, since the process is highly nonlinear and the controller was
detuned to achieve stability. The nonlinear MPC algorithm was able to accomplish
the transition very quickly. In the third experiment where an optimizer was used,
both the rate maximization and grade transition were accomplished within the
process constraints.
5.3. Aspen Target: Application to a Pulverized Coal Fired Boiler
Aspen Technology [4] reported an application of the Aspen Target to pulverized
coal fired boiler control. The objectives are to (i) improve boiler efficiency, (ii)
reduce NOx emissions, and (iii) reduce loss of ignition. The process consists of
pulverizers for crushing the coal to improve firing, boilers, and a turbine. The
coal burners are swirled-type low-NOx burners with a boiler steam capacity of 650
tons/hour.
During coal combustion, moisture and oxygen are believed to dominate the
formation of NOx, and the model relation is nonlinear. Although detailed knowledge
about the implementation was not provided, it is reported that the Aspen Target
controller was able to reduce NOx emissions by 15-25%, increase boiler efficiency
by 0.1-0.3%, and decrease loss of ignition by 2%.
7. Conclusions
We can draw the following conclusions.
- The past three years have seen rapid progress in the development and application of NMPC algorithms, with a total of 86 applications reported by the vendors included in this survey.
- The algorithms reported here differ in the simplifications used to generate a tractable control calculation; all of them, however, are based on adding a nonlinear model to a proven MPC formulation.
- None of the currently available NMPC algorithms includes the terminal state constraints or infinite prediction horizon required by control theory for nominal stability; instead they rely implicitly upon setting the prediction horizon long enough to effectively approximate an infinite horizon.
- The three most significant obstacles to NMPC applications are: nonlinear model development, state estimation, and rapid, reliable solution of the control algorithm in real time.
- Future needs for NMPC technology include development of a systematic approach for nonlinear model identification, nonlinear estimation methods, reliable numerical solution techniques, and better methods for justifying NMPC applications.
Acknowledgments
The authors would like to thank Jacques Richalet of Adersa, Hong Zhao of Aspen
Technology, Humil Munsif, William Poe, and Ujjal Basu of Continental Controls,
Mike Morshedi of DOT Products, Jim Keeler, Steve Piche, and Doug Johnston of
Pavilion Technologies, and Steve Treiber of Treiber Controls for their cooperation
in providing information for this paper.
References
[1] P. Berkowitz and M. Papadopoulos. Multivariable process control method and ap-
paratus. U.S. Patent, March 7 1995. No. 5396416.
[2] P. Berkowitz, M. Papadopoulos, L. Colwell, and M. Moran. Multivariable process
control method and apparatus. U.S. Patent, Jan. 30 1996. No. 5488561.
[3] E. Demoro, C. Axelrud, D. Johnston, and G. Martin. Neural network modeling and
control of polypropylene process. In Society of Plastic Engineers Int'l Conference,
Houston, Texas, Feb. 23-26 1997.
[4] Aspen Technology, Inc. Pulverized coal fired boiler optimization and control. Aspen
Technology Product Literature, 1998.
[5] Continental Controls, Inc. Product description: MVC 3.0. Houston, Texas, August
1995.
[6] R. E. Kalman. Contributions to the theory of optimal control. Bull. Soc. Math. Mex.,
5:102-119, 1960.
[7] R. E. Kalman and R. S. Bucy. New results in linear filtering and prediction theory.
Trans. ASME, J. Basic Engineering, pages 95-108, March 1961.
[8] J. Keeler, G. Martin, G. Boe, S. Piche, U. Mathur, and D. Johnston. The process
perfecter: The next step in multivariable control and optimization. Technical report,
Pavilion Technologies, Inc., 1996.
[9] S. S. Keerthi and E. G. Gilbert. Optimal infinite-horizon feedback laws for a general
class of constrained discrete-time systems: Stability and moving-horizon approxima-
tions. J. Optim. Theory Appl., 57(2):265-293, May 1988.
[10] L. S. Lasdon and A. D. Waren. GRG2 User's Guide. Department of Computer and
Information Science, Cleveland State University, Cleveland, Ohio, 1986.
[11] E. B. Lee and L. Markus. Foundations of Optimal Control Theory. John Wiley and
Sons, New York, 1967.
[12] G. Martin, G. Boe, J. Keeler, D. Timmer, and J. Havener. Method and apparatus
for modeling dynamic and steady-state processes for prediction, control and opti-
mization. US Patent, 1998.
[13] G. Martin and D. Johnston. Continuous model-based optimization. In Process Op-
timization Conference, Houston, Texas, March 24-26 1998. Gulf Publishing Co. and
Hydrocarbon Processing.
[14] D. Q. Mayne. Nonlinear model predictive control: An assessment. In Jeffrey C. Kan-
tor, Carlos E. Garcia, and Brice Carnahan, editors, Fifth International Conference
on Chemical Process Control, pages 217-231. AIChE and CACHE, 1997.
[15] Edward S. Meadows, Michael A. Henson, John W. Eaton, and James B. Rawlings.
Receding horizon control and discontinuous state feedback stabilization. Int. J. Con-
trol, pages 1217-1229, 1995.
[16] Edward S. Meadows and James B. Rawlings. Model Predictive Control. In Michael A.
Henson and Dale E. Seborg, editors, Nonlinear Process Control, chapter 5, pages
233-310. Prentice Hall, 1997.
[17] Hanna Michalska and David Q. Mayne. Robust receding horizon control of con-
strained nonlinear systems. IEEE Trans. Auto. Cont., 38(11):1623-1633, 1993.
[18] Manfred Morari and Jay H. Lee. Model Predictive Control: Past, Present and Future.
In Proceedings of PSE/Escape '97, Trondheim, Norway, 1997.
[19] Kenneth R. Muske and James B. Rawlings. Model predictive control with linear
models. AIChE J., 39(2):262-287, 1993.
[20] B. A. Ogunnaike and R. A. Wright. Industrial applications of nonlinear control. In
Jeffrey C. Kantor, Carlos E. Garcia, and Brice Carnahan, editors, Fifth International
Conference on Chemical Process Control, pages 47-59. AIChE and CACHE, 1997.
[21] N.M.C. Oliveira and L.T. Biegler. Constraint handling and stability properties of
model-predictive control. AIChE J., 40:1138-1155, 1994.
[22] N.M.C. Oliveira and L.T. Biegler. An extension of Newton-type algorithms for non-
linear process control. Automatica, 31:281-286, 1995.
[23] William Poe and Himal Munsif. Benefits of advanced process control and economic
optimization to petrochemical processes. Hydrocarbon Processing's Process Opti-
mization Conference and Exhibition, March 1998.
[24] D. M. Prett and R. D. Gillette. Optimization and constrained multivariable control of
a catalytic cracking unit. In Proceedings of the Joint Automatic Control Conference,
1980.
[25] S. Joe Qin and Thomas A. Badgwell. An Overview of Industrial Model Predictive
Control Technology. In Jeffrey C. Kantor, Carlos E. Garcia, and Brice Carnahan,
editors, Fifth International Conference on Chemical Process Control, pages 232-256.
AIChE and CACHE, 1997.
[26] W. Fred Ramirez. Process Control and Identification. Academic Press, New York,
New York, 1994.
[27] James B. Rawlings, Edward S. Meadows, and Ken Muske. Nonlinear model predictive
control: a tutorial and survey. In Proceedings of IFAC ADCHEM, Japan, 1994.
[28] J. Richalet, A. Rault, J. L. Testud, and J. Papon. Model predictive heuristic control:
Applications to industrial processes. Automatica, 14:413-428, 1978.
[29] G.B. Sentoni, J.P. Guiver, H. Zhao, and L.T. Biegler. State space nonlinear process
modeling: identification and universality. Revised for AIChE Journal, March 1998.
[30] Vance J. VanDoren. Multivariable controllers enter the mainstream. Control Engi-
neering, pages 107-112, March 1997.
[31] Steven J. Wright. Applying New Optimization Algorithms to Model Predictive Con-
trol. In Jeffrey C. Kantor, Carlos E. Garcia, and Brice Carnahan, editors, Chemical
Process Control: Assessment and New Directions for Research, AIChE Symposium
Series 316, pages 147-155. AIChE and CACHE, 1997.
[32] H. Zhao, J.P. Guiver, and G.B. Sentoni. An identification approach to nonlinear
state space model for industrial multivariable model predictive control. In American
Control Conference, Philadelphia, PA, June 21-24 1998.
S. Joe Qin, Department of Chemical Engineering, The University of Texas at
Austin, Austin, TX 78712 and Thomas A. Badgwell, Chemical Engineering De-
partment, MS-362, 6100 Main Street, Rice University, Houston, TX 77005-1892
E-mail address : [email protected] and [email protected]