
ASIA-PACIFIC JOURNAL OF CHEMICAL ENGINEERING

Asia-Pac. J. Chem. Eng. 2008; 3: 673–679


Published online in Wiley InterScience
(www.interscience.wiley.com) DOI:10.1002/apj.201

Special Theme Research Article


Kernel learning adaptive one-step-ahead predictive control
for nonlinear processes
Yi Liu, Haiqing Wang* and Ping Li
State Key Laboratory of Industrial Control Technology, Institute of Industrial Process Control, Zhejiang University, Hangzhou, 310027, P. R. China

Received 23 July 2008; Accepted 24 July 2008

ABSTRACT: A kernel learning adaptive one-step-ahead predictive control (KPC) algorithm is proposed for general unknown nonlinear processes. The main structure of the KPC law is twofold. A one-step-ahead predictive model is first obtained within the kernel learning (KL) identification framework. An analytical control law is then derived using the Taylor linearization method, resulting in efficient computation for on-line implementation. The convergence analysis of the KPC strategy is presented, and a new concept, the adaptive modification index, is proposed to improve the tracking ability of KPC and to reject unknown disturbances. This simple KPC scheme has few parameters to be chosen and a small computation scale, which makes it very suitable for real-time control. Numerical simulations on a nonlinear chemical process, compared with a well-tuned proportional-integral-derivative (PID) controller, show that the new KPC algorithm exhibits much better performance and more satisfactory robustness to both additive noise and unknown process disturbances. © 2008 Curtin University of Technology and John Wiley & Sons, Ltd.

KEYWORDS: nonlinear processes; kernel learning; adaptive control; predictive control

INTRODUCTION

Many chemical processes are inherently nonlinear dynamic systems. For the control of such processes, linear control techniques and classical proportional-integral-derivative (PID) controllers are sometimes found insufficient, and hence alternative nonlinear control strategies need to be explored to obtain satisfactory results. This has led to an increasing interest in controller design for unknown nonlinear dynamic processes. However, due to the absence of a general methodology, the design of nonlinear controllers is a difficult task. For nonlinear processes, neural network (NN) based adaptive or predictive control techniques have been intensively studied in the last two decades.[1–7] However, there are still no guarantees of high convergence speed, avoidance of local minima, or avoidance of overfitting; meanwhile, no general methods are available for choosing the number of hidden units of a common NN.

Recently, support vector machines (SVM), a novel and powerful learning method based on statistical learning theory (SLT) and the kernel learning (KL) technique, have been gaining widespread attention in the field of nonlinear process modeling and control.[8–13] Some SVM model based nonlinear control algorithms have been proposed.[14–16] However, there are some technical difficulties in these new control schemes. The control strategy proposed by Iplikci[16] requires intensive computation. The controllers designed by Zhong et al.[15] and Bao et al.[14] may be rendered invalid when their quadratic polynomial or linear kernel function based SVM models cannot describe the nonlinear dynamics well. For process control applications, on the other hand, it is desirable to keep the control strategy as simple as possible for real-time implementation.

In this paper, a new KL adaptive one-step-ahead predictive control (KPC) algorithm with an analytical form is presented for general nonlinear systems. After a brief review of the nonlinear generalized minimum variance (NGMV) control law in the section on NGMV Control Law, the main structure of KPC is formulated in the section on Proposed Control Law: KPC, which includes two technical parts. First, a one-step-ahead predictive model is obtained by the KL identification framework; second, the control law is derived based on the first-order Taylor approximation method, resulting in an analytical control law with effective computation for real-time implementation. The convergence analysis of this control strategy is presented in the section on Convergence Analysis and Corresponding Adaptive Control Strategy; meanwhile, a new concept of adaptive modification index (AMI) is obtained to achieve good tracking performance. This simple KPC scheme has few parameters to be chosen beforehand and a small computation scale, which makes it very suitable for nonlinear real-time control. Application of the proposed KPC algorithm to a nonlinear chemical process is illustrated in the section on Simulation Results, and the conclusions are drawn in the final section.

*Correspondence to: Haiqing Wang, State Key Laboratory of Industrial Control Technology, Institute of Industrial Process Control, Zhejiang University, Hangzhou, 310027, P. R. China. E-mail: [email protected]

NGMV CONTROL LAW

For simplicity, we limit our discussion to single-input–single-output (SISO) nonlinear processes. The extension of the proposed method to multi-input–multi-output (MIMO) cases, however, is straightforward and will not be discussed here. Many SISO nonlinear processes can be accurately represented by the following discrete model:

y(k+1) = f[y(k), \ldots, y(k-n_y+1), u(k), \ldots, u(k-n_u+1)]   (1)

where k is the discrete time, y(k) and u(k) represent the controlled output and the manipulated input, respectively, f(·) is a general nonlinear function, and ny and nu denote the process orders. Equation (1) can be rewritten compactly as

y(k+1) = f[Y(k), u(k), U(k-1)] = f[x(k)]   (2)

where Y(k) = [y(k), ..., y(k−ny+1)] and U(k−1) = [u(k−1), ..., u(k−nu+1)] are the vectors consisting of the past process outputs and the past process inputs, respectively.

Let yr(k) be the desired process output. Then the control law can be obtained by minimizing the one-step-ahead weighted predictive control performance index[17]

J[u(k)] = [y_r(k+1) - y(k+1)]^2 + \lambda [u(k) - u(k-1)]^2   (3)

\text{s.t.}\quad u_{\min}(k) \le u(k) \le u_{\max}(k), \quad \Delta u_{\min}(k) \le \Delta u(k) \le \Delta u_{\max}(k)   (4)

where Δu(k) = u(k) − u(k−1) is the manipulated variable increment, which is subject to Eqn (4), and λ (λ > 0) denotes the control effort weighting factor.

It is difficult to obtain the optimal solution of Eqn (3) because it needs to be solved by a nonlinear optimization method. For process control, it is desirable to obtain a simple analytical control law, even though it may be sub-optimal, so that the computation requirement can be greatly reduced. We can expand Eqn (2) in a Taylor series with respect to the argument u(k) at the point u(k−1), while neglecting the higher-order terms.[2] Then we have

y(k+1) = f[\tilde{x}(k)] + \left.\frac{\partial f}{\partial u(k)}\right|_{u(k)=u(k-1)} \Delta u(k) + O(\Delta u(k)) \approx f[\tilde{x}(k)] + \left.\frac{\partial f}{\partial u(k)}\right|_{u(k)=u(k-1)} \Delta u(k)   (5)

where x̃(k) = [Y(k), u(k−1), U(k−1)] and O(Δu(k)) denotes the higher-order terms of Δu(k). Substituting Eqn (5) into Eqn (3) and then minimizing, we obtain the following NGMV control law[2]

u(k) = u(k-1) + \frac{\left.\partial f/\partial u(k)\right|_{u(k)=u(k-1)}}{\lambda + \left[\left.\partial f/\partial u(k)\right|_{u(k)=u(k-1)}\right]^2}\,\{y_r(k+1) - f[\tilde{x}(k)]\}   (6)

where ∂f/∂u(k)|u(k)=u(k−1) is the input–output sensitivity function, and f[x̃(k)] is the quasi-one-step-ahead predictive output.
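To make Eqn (6) concrete, the following Python sketch (not from the paper; the function and argument names are my own) computes one NGMV control move from a user-supplied one-step-ahead prediction and input sensitivity. Handling the box constraints of Eqn (4) by simple clipping is an assumption on my part rather than something the paper specifies.

```python
import numpy as np

def ngmv_control_move(f_pred, dfdu, u_prev, y_ref, lam,
                      u_min=-np.inf, u_max=np.inf,
                      du_min=-np.inf, du_max=np.inf):
    """One NGMV control move, Eqn (6), with the box constraints of Eqn (4).

    f_pred : quasi-one-step-ahead prediction f[x_tilde(k)] (model output with u(k) = u(k-1))
    dfdu   : input-output sensitivity df/du(k) evaluated at u(k) = u(k-1)
    u_prev : previous input u(k-1)
    y_ref  : set-point y_r(k+1)
    lam    : control-effort weighting factor lambda > 0
    """
    du = dfdu / (lam + dfdu ** 2) * (y_ref - f_pred)  # unconstrained increment
    du = np.clip(du, du_min, du_max)                  # increment constraint (assumed handling)
    return float(np.clip(u_prev + du, u_min, u_max))  # amplitude constraint
```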
PROPOSED CONTROL LAW: KPC

To implement the control law described earlier, ∂f/∂u(k)|u(k)=u(k−1) and f[x̃(k)] must be calculated on line. It is therefore necessary to develop an effective method to estimate these two quantities. Several NN based methods have been used to provide them to the controller.[2] However, as mentioned earlier, NN still has a number of weak points; furthermore, NN models are generally not parsimonious, and hence any adaptive control scheme based on them has to deal with updating a very large number of weights. On the contrary, a KL identification model can describe the nonlinear system well while exhibiting good generalization ability, especially with very few samples.[10] Hence we suggest using the KL method to identify the nonlinear system. Based on SLT and KL theory, a unified KL identification model can be expressed as

y_m(k+1) = KL[x(k), \alpha(k)]   (7)

where α(k) is the KL model coefficient vector and ym(k+1) is the model predictive output. α(k) can be obtained by batch learning or on-line learning.[8,10,13] When a support vector regression (SVR) or a least squares SVR is adopted, the KL identification model can be expressed in a uniform formula

y_m(k+1) = KL[x(k), \alpha(k)] = \sum_{i=1}^{N_{SV}} \alpha_i K\langle x(i), x(k)\rangle + b   (8)

where the αi denote generalized Lagrange multipliers, which are linear combinations of the Lagrange multipliers; NSV is the number of support vectors; ⟨·,·⟩ denotes the dot product and K⟨x(i), x(k)⟩ is a kernel function that handles the inner product in the feature space, so that the explicit form of the nonlinear mapping does not need to be known; b is the bias term.[10,11]
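A minimal sketch of the predictor in Eqn (8), assuming the support vectors x(i), the coefficients αi and the bias b have already been produced by SVR or LS-SVR training; the Gaussian kernel used here is only one of the choices discussed later, and all names are illustrative:

```python
import numpy as np

def gaussian_kernel(xi, xk, sigma=1.0):
    """K(xi, xk) = exp(-||xi - xk||^2 / sigma^2)."""
    return float(np.exp(-np.sum((np.asarray(xi) - np.asarray(xk)) ** 2) / sigma ** 2))

def kl_predict(x_k, support_vectors, alpha, b, kernel=gaussian_kernel, **kernel_args):
    """One-step-ahead prediction y_m(k+1) = sum_i alpha_i K<x(i), x(k)> + b, Eqn (8)."""
    return sum(a_i * kernel(x_i, x_k, **kernel_args)
               for a_i, x_i in zip(alpha, support_vectors)) + b
```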
Substituting the KL model into Eqn (6) yields the original KPC law

u(k) = u(k-1) + \frac{\left.\partial KL/\partial u(k)\right|_{u(k)=u(k-1)}}{\lambda + \left[\left.\partial KL/\partial u(k)\right|_{u(k)=u(k-1)}\right]^2}\,\{y_r(k+1) - KL[\tilde{x}(k)]\}   (9)

To compensate for both the Taylor approximation error and the identification error, an adaptive modification index (AMI) μ(k) is introduced into the control law in Eqn (9). Then the KPC law can be rewritten as

u(k) = u(k-1) + \frac{\mu(k)\left.\partial KL/\partial u(k)\right|_{u(k)=u(k-1)}}{\lambda + \left[\left.\partial KL/\partial u(k)\right|_{u(k)=u(k-1)}\right]^2}\,\{y_r(k+1) - KL[\tilde{x}(k)]\}   (10)

The proposed AMI in Eqn (10) is quite different from the adjustable parameter of Gao et al.,[2] although the two may seem similar. The AMI is time varying, whereas the parameter proposed by Gao et al.[2] is a constant that has to be selected by simulation. Furthermore, the AMI can be obtained adaptively from the convergence analysis at every sampling time, which guarantees good tracking ability at all times, especially when the system suffers from unknown disturbances.

Moreover, to overcome the model mismatch of the KL identification and other unknown disturbances, it is necessary to utilize the latest measured output y(k) to compensate for them. Here we use a simple but useful strategy, namely the output feedback used in model predictive control.[18] By adding the error e(k) = y(k) − ym(k) to the quasi-one-step-ahead predictive output KL[x̃(k)], a corrected prediction is obtained

y_p(k+1) = KL[\tilde{x}(k)] + h\,e(k)   (11)

where h is the error correction coefficient, and in most cases h = 1. Consequently, the KPC law can be ultimately formulated as

u(k) = u(k-1) + \frac{\mu(k)\left.\partial KL/\partial u(k)\right|_{u(k)=u(k-1)}}{\lambda + \left[\left.\partial KL/\partial u(k)\right|_{u(k)=u(k-1)}\right]^2}\,E(k+1)   (12)

where E(k+1) = yr(k+1) − e(k) − KL[x̃(k)] is the total error of the KL predictive model at time k.
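Eqn (12), together with the output-feedback correction of Eqn (11) with h = 1, can be sketched as a single update function. This is my own illustrative code; `kl_quasi_pred` and `dkl_du` are assumed to come from a kernel model such as the `kl_predict` sketch above and its input sensitivity (e.g. Eqn (20) or Eqn (23) below).

```python
def kpc_control_move(u_prev, y_ref, y_meas, y_model_prev,
                     kl_quasi_pred, dkl_du, mu, lam):
    """One KPC move, Eqn (12), with output-feedback correction (Eqn 11, h = 1).

    y_meas        : latest measured output y(k)
    y_model_prev  : model prediction y_m(k) made at the previous step
    kl_quasi_pred : KL[x_tilde(k)], quasi-one-step-ahead prediction at u(k) = u(k-1)
    dkl_du        : dKL/du(k) evaluated at u(k) = u(k-1)
    mu, lam       : adaptive modification index mu(k) and weighting factor lambda
    """
    e_k = y_meas - y_model_prev            # model error e(k) = y(k) - y_m(k)
    E_next = y_ref - e_k - kl_quasi_pred   # total error E(k+1)
    du = mu * dkl_du / (lam + dkl_du ** 2) * E_next
    return u_prev + du, E_next
```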
CONVERGENCE ANALYSIS AND CORRESPONDING ADAPTIVE CONTROL STRATEGY

It is well known that it is extremely important to guarantee that a control law is convergent. Thus, we investigate the proposed KPC law in detail and obtain the following theorem.

Theorem 1. There exists a suitable μ(k) such that the control algorithm given in Eqn (12) is convergent.

Proof: Let

\varepsilon(k+1) = y_r(k+1) - y(k+1)   (13)

and

\delta(k) = \frac{\mu(k)\left.\partial KL/\partial u(k)\right|_{u(k)=u(k-1)}}{\lambda + \left[\left.\partial KL/\partial u(k)\right|_{u(k)=u(k-1)}\right]^2}\,E(k+1)   (14)

Note that y(k+1) = f[x(k)] = KL[x(k)] + e(k); then, from the mean-value theorem, we have

\varepsilon(k+1) = y_r(k+1) - e(k) - KL[\tilde{x}(k)] - \left.\frac{\partial KL}{\partial u(k)}\right|_{u(k)=\bar{u}(k-1)} \delta(k)   (15)

where ū(k−1) ∈ [u(k−1), u(k−1) + δ(k)]. Substituting Eqn (14) into Eqn (15) yields

\varepsilon(k+1) = \left\{1 - \frac{\mu(k)\left.\partial KL/\partial u(k)\right|_{u(k)=\bar{u}(k-1)}\,\left.\partial KL/\partial u(k)\right|_{u(k)=u(k-1)}}{\lambda + \left[\left.\partial KL/\partial u(k)\right|_{u(k)=u(k-1)}\right]^2}\right\} E(k+1)   (16)

Thus, there exists a suitable AMI: choosing

\mu(k) = \frac{\lambda + \left[\left.\partial KL/\partial u(k)\right|_{u(k)=u(k-1)}\right]^2}{\left.\partial KL/\partial u(k)\right|_{u(k)=\bar{u}(k-1)}\,\left.\partial KL/\partial u(k)\right|_{u(k)=u(k-1)}}   (17)

one can always make ε(k+1) = 0. That is to say, the KPC law given in Eqn (12) is convergent.

To obtain a reliable estimate of μ(k), we design a simple and efficient recursive method. According to Eqns (9) and (10), μ(0) is set to 1; then, at time k, μ(k) is replaced by μ(k−1) in Eqn (14) to obtain an estimate of δ(k). For more precision, let ū(k−1) = [u(k−1) + u(k)]/2; finally, the actual AMI at time k is calculated from Eqn (17). By substituting Eqn (17) into Eqn (12), the convergent and adaptive control law at time k can be deduced as

u(k) = u(k-1) + \frac{E(k+1)}{\left.\partial KL/\partial u(k)\right|_{u(k)=\bar{u}(k-1)}}   (18)

Figure 1 shows the whole KPC strategy. TDL denotes the common time delay and GTDL is defined as a general time delay, through which x̃(k) = [Y(k), u(k−1), U(k−1)] is obtained. KPC is composed of two primary modules: a KL predictive model and an adaptive controller. The flow of this simple strategy is as follows: at time k, a corrected KL prediction is obtained by adding the latest error e(k) to the quasi-one-step-ahead predictive output KL[x̃(k)]; then the total error E(k+1) and the AMI μ(k) are both introduced into the controller to compute the process control law u(k).

Figure 1. Flowsheet of KPC.

There are three main advantages of the proposed KPC law:

1. It is easy to obtain an accurate nonlinear predictive model with good generalization by the KL identification methodology.
2. The control law can be modified adaptively by the AMI μ(k) to keep high tracking performance, especially when the process is time varying or suffers from unknown disturbances.
3. The simple analytical control law makes real-time computation effective.

Two common kernel functions, Gaussian and polynomial, are usually utilized in KL methods.[10] When a Gaussian kernel function is adopted, K(xi, xj) = exp(−‖xi − xj‖²/σ²), where σ is the Gaussian kernel width, the KL identification model can be formulated as

y_m(k+1) = KL[x(k)] = \sum_{i=1}^{N_{SV}} \alpha_i \exp(-\|x(i)-x(k)\|^2/\sigma^2) + b   (19)

From Eqn (19) we can obtain

\left.\frac{\partial KL}{\partial u(k)}\right|_{u(k)=u(k-1)} = \frac{2}{\sigma^2}\sum_{i=1}^{N_{SV}} \alpha_i \exp(-\|x(i)-\tilde{x}(k)\|^2/\sigma^2)\,[x_{n_y+1}(i) - u(k-1)]   (20)

where x̃(k) = [Y(k), u(k−1), U(k−1)] and x_{ny+1}(i) is the (ny+1)th element of the vector x(i). Thus, the Gaussian kernel function based KPC law is expressed as

u(k) = u(k-1) + \frac{E(k+1)}{\dfrac{2}{\sigma^2}\sum_{i=1}^{N_{SV}} \alpha_i \exp(-\|x(i)-\tilde{x}(k)\|^2/\sigma^2)\,[x_{n_y+1}(i) - u(k-1)]}   (21)
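The Gaussian-kernel sensitivity of Eqn (20) and the adaptive law of Eqns (14), (17) and (18) might be put together as in the sketch below. This is illustrative code under my own conventions: `SV` stacks the support vectors x(i) row-wise, `x_tilde` is the regressor with u(k) frozen at u(k−1), and column `ny` (0-based) therefore holds the input component x_{ny+1}(i).

```python
import numpy as np

def gaussian_kl_sensitivity(x_eval, SV, alpha, sigma, ny):
    """dKL/du(k) for a Gaussian kernel model, Eqn (20), with the regressor frozen at x_eval."""
    k_vals = np.exp(-np.sum((SV - x_eval) ** 2, axis=1) / sigma ** 2)
    return 2.0 / sigma ** 2 * float(np.sum(alpha * k_vals * (SV[:, ny] - x_eval[ny])))

def adaptive_kpc_move(E_next, mu_prev, lam, x_tilde, SV, alpha, sigma, ny):
    """Recursive AMI estimate and adaptive control move, Eqns (14), (17) and (18)/(21)."""
    u_prev = x_tilde[ny]                                          # u(k-1) sits in the input slot
    g0 = gaussian_kl_sensitivity(x_tilde, SV, alpha, sigma, ny)   # sensitivity at u(k-1)
    delta = mu_prev * g0 / (lam + g0 ** 2) * E_next               # delta(k) using mu(k-1), Eqn (14)
    x_mid = x_tilde.astype(float).copy()
    x_mid[ny] = u_prev + 0.5 * delta                              # mid-point u_bar(k-1)
    g_bar = gaussian_kl_sensitivity(x_mid, SV, alpha, sigma, ny)  # sensitivity at u_bar(k-1)
    mu = (lam + g0 ** 2) / (g_bar * g0)                           # actual AMI, Eqn (17)
    u_new = u_prev + E_next / g_bar                               # adaptive KPC law, Eqn (18)
    return u_new, mu
```

The polynomial-kernel case, Eqns (23) and (24) below, follows the same pattern with the Gaussian sensitivity replaced by the corresponding polynomial-kernel sensitivity (a sketch of which appears at the end of the Simulation Results section).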
When a polynomial kernel function is chosen, K(xi, xj) = (⟨xi, xj⟩ + τ)^p, where the integer p ≥ 1 is the polynomial degree and τ = 1 in most cases. Similarly, we can deduce the polynomial kernel function based KL identification model and its corresponding KPC law as follows:

KL[x(k)] = \sum_{i=1}^{N_{SV}} \alpha_i (\langle x(i), x(k)\rangle + \tau)^p + b   (22)

\left.\frac{\partial KL}{\partial u(k)}\right|_{u(k)=u(k-1)} = p\sum_{i=1}^{N_{SV}} \alpha_i (\langle x(i), \tilde{x}(k)\rangle + \tau)^{p-1} x_{n_y+1}(i)   (23)

u(k) = u(k-1) + \frac{E(k+1)}{p\sum_{i=1}^{N_{SV}} \alpha_i (\langle x(i), \tilde{x}(k)\rangle + \tau)^{p-1} x_{n_y+1}(i)}   (24)

SIMULATION RESULTS

As an example to illustrate the validity of the proposed KPC algorithm, we consider a highly nonlinear continuous stirred tank reactor (CSTR) process, which is known for its significant nonlinear behavior, exhibits multiple steady states, and poses a difficult control problem.[5] This complex CSTR process has been studied with other control strategies, e.g. NN based nonlinear internal model control[7] and SVM based generalized predictive control.[16] Figure 2 is a schematic of this CSTR process, in which an exothermic irreversible first-order reaction takes place.

Figure 2. Schematic of the CSTR. This figure is available in colour online at www.apjChemEng.com.

The concentration Ca inside the reactor is controlled by manipulating the coolant flow qc through the jacket. From the mass and energy balances, the dynamics of this CSTR process can be described as follows:

\frac{dC_a(t)}{dt} = \frac{q}{V}[C_{a0}(t) - C_a(t)] - k_0 C_a(t)\exp\!\left(\frac{-E}{RT(t)}\right)   (25)

\frac{dT(t)}{dt} = \frac{q}{V}[T_0 - T(t)] - \frac{k_0 \Delta H}{\rho C_p} C_a(t)\exp\!\left(\frac{-E}{RT(t)}\right) + \frac{\rho_c C_{pc}}{\rho C_p V}\, q_c(t)\left[1 - \exp\!\left(\frac{-h_a}{q_c(t)\rho_c C_{pc}}\right)\right][T_{c0} - T(t)]   (26)

The nominal conditions for a product concentration Ca = 0.1 mol l⁻¹ are T = 438.54 K and qc = 103.41 l min⁻¹. The meanings and nominal values of the other variables in the above equations can be found in Nahas et al.[5] Under the input constraint 90 l min⁻¹ ≤ qc ≤ 110 l min⁻¹, the control objective is to regulate Ca by manipulating qc.
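For reference, Eqns (25) and (26) translate into a small ODE right-hand side such as the sketch below. The parameter values are not listed in the paper (they are taken from Nahas et al.[5]), so here they are passed in as a dictionary rather than hard-coded; `E_over_R` denotes the lumped ratio E/R.

```python
import numpy as np

def cstr_rhs(t, state, qc, p):
    """Right-hand side of the CSTR model, Eqns (25) and (26).

    state : [Ca, T], reactor concentration (mol/l) and temperature (K)
    qc    : coolant flow rate (l/min), the manipulated input
    p     : dict of physical parameters q, V, Ca0, T0, Tc0, k0, E_over_R,
            dH, rho, Cp, rhoc, Cpc, ha with values as in Nahas et al. [5]
    """
    Ca, T = state
    arrhenius = np.exp(-p["E_over_R"] / T)
    dCa = p["q"] / p["V"] * (p["Ca0"] - Ca) - p["k0"] * Ca * arrhenius
    dT = (p["q"] / p["V"] * (p["T0"] - T)
          - p["k0"] * p["dH"] / (p["rho"] * p["Cp"]) * Ca * arrhenius
          + p["rhoc"] * p["Cpc"] / (p["rho"] * p["Cp"] * p["V"]) * qc
          * (1.0 - np.exp(-p["ha"] / (qc * p["rhoc"] * p["Cpc"])))
          * (p["Tc0"] - T))
    return np.array([dCa, dT])
```

Holding qc constant over each sampling interval and integrating this right-hand side (for instance with scipy.integrate.solve_ivp) would reproduce the kind of sampled data used below for identification and closed-loop testing.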
The sampling period of all process measurements is 6 s. For the identification procedure, a sequence of only 500 samples, far fewer than required by NN based identification methods,[7] is generated to form the identification set S = {x(i), y(i+1)}_{i=1}^{N}, x ∈ R^n, y ∈ R, where n = nu + ny is the dimension of the input vector, which is chosen as x(k) = [y(k), y(k−1), y(k−2), u(k), u(k−1), u(k−2)] according to Lightbody and Irwin[7] and Iplikci.[16] The simulation environment is Matlab V7.1 on a 2.4 GHz CPU with 256 MB of memory. Without loss of generality, an offline LSSVR identification model is used with a polynomial kernel function, and the regularization parameter γ = 10³ and polynomial degree p = 3 are chosen by a cross-validation approach. It takes only several minutes to obtain this satisfactory KL identification model. Compared with the other methods, the KL identification is much easier to implement.

Case 1: set-point tracking

First, the set-point tracking ability of KPC is investigated. To provide a suitable comparison with standard techniques, a well-tuned PID controller with parameters (Kc, Ti, Td) = (190, 0.056, 0.827) is used.[5] As mentioned above, a larger λ implies a heavier penalty on Δu(k), and vice versa. With the action of the AMI, λ can be selected over a wide range without degrading the control performance. The integral of the absolute set-point tracking error (IAE) is used to quantify the performance of both controllers. Figure 3 depicts the details of the performance comparison; the IAE performance index is also shown in Fig. 3. When we select λ = 10⁻⁸, the IAE of KPC is 0.039, which is obviously much smaller than that of the PID controller (IAE = 0.164). When λ = 10⁻⁴ or λ = 1, KPC adaptively obtains the same control performance due to the effect of the AMI (in both cases IAE = 0.098). The KPC controller with different λ tracks the set-point quickly and with little overshoot. Moreover, the running time of KPC in one sample is about 0.05 s, which is much less than the sampling period of this CSTR process (6 s) and almost equal to that of PID (0.03 s). We can therefore conclude that the KPC strategy presents much better performance than the PID controller.

Figure 3. System response to the set-point tracking with different weighting factors. This figure is available in colour online at www.apjChemEng.com.

Case 2: noise and disturbance rejection

In order to mimic a realistic situation, the system is subjected to additive noise and unmeasured disturbances. Different magnitudes of the superimposed Gaussian noise and disturbance are discussed. Satisfactory performance is achieved when λ varies over a wide range, as analyzed above; we choose λ = 10⁻⁵ in this case. It is shown in Fig. 4 that KPC is more robust against additive noise and unknown disturbance. The detailed IAE comparison is tabulated in Table 1. All the simulation results are the average of 20 runs.

Figure 4. System response with both noise and disturbance. This figure is available in colour online at www.apjChemEng.com.

Table 1. IAE comparison of KPC and PID with different noises and disturbances.

Simulation conditions   Control strategy   Low noise   Normal noise   High noise
Low disturbance         KPC                0.296       0.411          0.536
                        PID                0.446       0.504          0.541
Normal disturbance      KPC                0.469       0.585          0.683
                        PID                0.698       0.735          0.779
High disturbance        KPC                0.772       0.878          0.950
                        PID                1.001       1.058          1.116

Case 3: a ‘stair’ reference tracking

Furthermore, is the proposed controller still valid when operating over a large range? Figure 5 shows the system response to a ‘stair’ reference starting at 0.085 mol/l and ending at 0.12 mol/l; each step has a duration of 10 min and an amplitude of 0.005 mol/l. The value of λ is unchanged and kept the same as in Case 2 to validate its robustness. As can be appreciated, the simple control strategy still provides excellent set-point tracking over the nonlinear operating range, with high closed-loop bandwidth. Importantly, the performance of the closed-loop system at the lower end of the operating region is similar to that at the upper end. In contrast, a clear degradation in performance can be observed under the PID controller when the concentration is smaller than 0.09 mol/l or larger than 0.11 mol/l.

Figure 5. System response to tracking a ‘stair’ set-point. This figure is available in colour online at www.apjChemEng.com.
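The offline LSSVR identification used above (polynomial kernel, γ = 10³, p = 3) reduces to solving one linear system. The sketch below is a generic least squares SVR formulation in the spirit of Suykens et al.,[11] not the authors' Matlab code; with it, every training sample plays the role of a support vector, and the trained model plugs directly into Eqns (22) and (23).

```python
import numpy as np

def lssvr_train_poly(X, y, gamma=1e3, p=3, tau=1.0):
    """Least squares SVR with polynomial kernel K(xi, xj) = (<xi, xj> + tau)^p.

    Solves the standard LS-SVM dual system
        [ 0        1^T       ] [ b     ]   [ 0 ]
        [ 1   K + I / gamma  ] [ alpha ] = [ y ]
    where K is the kernel matrix of the N training regressors stacked row-wise in X.
    """
    N = X.shape[0]
    K = (X @ X.T + tau) ** p
    A = np.zeros((N + 1, N + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(N) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[1:], sol[0]                      # alpha, b

def poly_kl_predict(x_k, X, alpha, b, p=3, tau=1.0):
    """Prediction with the trained model, matching Eqn (22)."""
    return float(alpha @ (X @ x_k + tau) ** p + b)

def poly_kl_sensitivity(x_tilde, X, alpha, p=3, tau=1.0, ny=3):
    """dKL/du(k) for the polynomial kernel model, Eqn (23); column ny of X holds x_{ny+1}(i)."""
    return float(p * np.sum(alpha * (X @ x_tilde + tau) ** (p - 1) * X[:, ny]))
```

With ny = 3, column 3 of X corresponds to the u(k) component of the regressor x(k) = [y(k), y(k−1), y(k−2), u(k), u(k−1), u(k−2)] used in the simulation study.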
CONCLUSIONS

This paper addresses the subject of nonlinear control and presents a new, simple analytical control strategy. The simplicity of identification by the KL methodology and the excellent performance of the KPC controller make the strategy very attractive for nonlinear process control. The proposed KPC law can be easily implemented; in addition, the AMI assures the convergence of the algorithm. The simulations on such a severely nonlinear system as the CSTR process show that the proposed algorithm has good tracking performance and satisfactory robustness to both additive noise and unknown disturbance. Furthermore, since the KL identification model can describe nonlinear systems well and with good generalization from a small sample set, it can also be expected that KL based control, not limited to using the KL methodology only for obtaining the process model, is a promising approach for nonlinear process control.

Acknowledgements

This work was sponsored by the National Natural Science Foundation of China (Project No. 20576116), the National Key Technology R&D Program of China (Project No. 2007BAF14B02) and the Alexander von Humboldt Foundation, Germany (Dr Haiqing Wang), which are gratefully acknowledged.

REFERENCES

[1] M.S. Ahmed. IEEE Trans. Automat. Contr., 2000; 45(1), 119–124.
[2] F.R. Gao, F.L. Wang, M.Z. Li. Chem. Eng. Sci., 2000; 55, 1283–1288.
[3] K.J. Hunt, D. Sbarbaro, R. Zbikowski, P.J. Gawthrop. Automatica, 1992; 28(6), 1083–1112.
[4] C.H. Lu, C.C. Tsai. J. Process Control, 2007; 17(1), 83–92.
[5] E.P. Nahas, M.A. Henson, D.E. Seborg. Comput. Chem. Eng., 1992; 16(12), 1039–1057.
[6] K.S. Narendra, K. Parthasarathy. IEEE Trans. Neural Netw., 1990; 1(1), 4–27.
[7] G. Lightbody, G.W. Irwin. IEEE Trans. Neural Netw., 1997; 8(3), 553–567.
[8] Y. Liu, D.C. Yang, H.Q. Wang, P. Li. Modeling of fermentation processes using online kernel learning. In Proceedings of the 17th IFAC World Congress, Seoul, 2008; 9679–9684.
[9] V. Vapnik. The Nature of Statistical Learning Theory, Springer-Verlag: New York, 1995.
[10] B. Schölkopf, A.J. Smola. Learning with Kernels, The MIT Press: Cambridge, MA, 2002.
[11] J.A.K. Suykens, T. van Gestel, J. de Brabanter, B. de Moor, J. Vandewalle. Least Squares Support Vector Machines, World Scientific: Singapore, 2002.
[12] H.T. Toivonen, S. Totterman, B. Akesson. Int. J. Control, 2007; 80(9), 1454–1470.
[13] H.Q. Wang, P. Li, F.R. Gao, Z.H. Song, S.X. Ding. AIChE J., 2006; 52(10), 3515–3531.
[14] Z.J. Bao, D.Y. Pi, Y.X. Sun. Chin. J. Chem. Eng., 2007; 15(5), 691–697.
[15] W.M. Zhong, G.L. He, D.Y. Pi, Y.X. Sun. Chin. J. Chem. Eng., 2005; 13(3), 373–379.
[16] S. Iplikci. Int. J. Robust Nonlinear Cont., 2006; 16, 843–862.
[17] G.C. Goodwin, K.S. Sin. Adaptive Filtering Prediction and Control, Prentice-Hall: Englewood Cliffs, NJ, 1984.
[18] S.J. Qin, T.A. Badgwell. Control Eng. Pract., 2003; 11(7), 733–764.
