
RBF Neural Networks And Its Application In Establishing Nonlinear Self-tuning Model

Pan Lideng, Huang Xiaofeng, Ma Junying, Pan Yuying

(Dept. of Chemical Automation, Beijing University of Chemical Technology, Beijing 100029, China)
e-mail address: [email protected]




Abstract: The principle and algorithm of neural networks using radial basis functions (RBF) are discussed in this paper. The recursive least squares method is used to solve the self-tuning problem of the RBF neural network, so that self-tuning models of nonlinear time-varying systems are obtained. Using RBF neural networks, a self-tuning model of a reactor is established and compared with a BP neural network model and a regression model. The results show that the RBF neural network model is effective.

Keywords: Neural network, Radial Basis Function Networks, Nonlinear Models,
Self-tuning, Time-varying System, Identification, On-line




1. INTRODUCTION

Although artificial neural networks were introduced into the field of automatic control only recently, they have been highly successful in modeling complex processes that are difficult to handle with traditional methods. Neural networks require less data analysis and modeling effort to process information: they implement complex nonlinear mappings in a non-programmed, adaptive way and combine information storage with information processing.

At present, the multi-layered feed-forward network trained with the BP algorithm is the most extensively used in nonlinear process modeling and identification. However, the slow convergence and local minima of the BP algorithm hinder its industrial application, especially for establishing on-line self-tuning models.

The radial basis function network presented by Chen, et al. (1990) provides a new method for on-line identification and modeling, using linear optimization methods such as least squares. Because a linear optimization algorithm is used, RBF networks ensure global convergence. In this paper, the recursive least squares method is used to implement the self-tuning neural network, so that a self-tuning model of a time-varying nonlinear system can be obtained.



2. STRUCTURE AND ALGORITHM OF RBF
NEURAL NETWORK

RBF neural networks are static networks of interconnected neurons with no internal dynamics (each neuron's impulse response is h(t) = δ(t)). Connections exist only between neurons of neighbouring layers, and signals are propagated from lower layers to higher layers. A typical RBF neural network is depicted in Fig. 1. The first layer performs a fixed non-linear transformation which maps the n-dimensional input space onto a new space, and the output layer implements a linear combiner on this new space.

An RBF expansion clearly describes the relation between the input and output of the network:

$$f_r(X(k)) = \theta_0 + \sum_{i=1}^{n_r} \theta_i \,\phi\!\left(\left\| X(k) - C_i \right\|\right) \qquad (1)$$

where $X = [x_1, \dots, x_n]^T \in R^n$ represents the n inputs of the network, $\|\cdot\|$ denotes the Euclidean norm, $\theta_i$ ($0 \le i \le n_r$) are the weights, and $C_i \in R^n$ ($1 \le i \le n_r$) are the RBF centres. The centres are fixed points in the n-dimensional space and must be appropriately chosen from the input domain; $n_r$ is the number of centres. $\phi(\cdot)$ is a function from $R^+$ to $R$. Typical choices for $\phi(\cdot)$ are:

$$\phi(\nu) = \nu^2 \log(\nu) \qquad (2\text{-a})$$
$$\phi(\nu) = \exp\!\left(-\nu^2 / \sigma^2\right) \qquad (2\text{-b})$$
$$\phi(\nu) = \left(\nu^2 + \sigma^2\right)^{1/2} \qquad (2\text{-c})$$
$$\phi(\nu) = 1 \,/\, \left(\nu^2 + \sigma^2\right)^{1/2} \qquad (2\text{-d})$$
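As a concrete illustration of eqn (1) with the Gaussian basis of eqn (2-b), the following minimal C sketch evaluates the network output for one input vector. The identifiers (rbf_output, N_IN, NR) and the fixed width sigma are illustrative assumptions, not part of the authors' original implementation:

```c
#include <math.h>

#define N_IN  5   /* number of network inputs n (assumption)  */
#define NR    10  /* number of RBF centres nr (assumption)    */

/* Evaluate fr(X) = theta0 + sum_i theta_i * phi(||X - C_i||),
   using the Gaussian basis phi(v) = exp(-v^2 / sigma^2) of eqn (2-b). */
double rbf_output(const double x[N_IN],
                  const double centres[NR][N_IN],
                  const double theta[NR + 1],   /* theta[0] is the bias term */
                  double sigma)
{
    double y = theta[0];
    for (int i = 0; i < NR; i++) {
        double dist2 = 0.0;                     /* squared Euclidean distance */
        for (int j = 0; j < N_IN; j++) {
            double d = x[j] - centres[i][j];
            dist2 += d * d;
        }
        y += theta[i + 1] * exp(-dist2 / (sigma * sigma));
    }
    return y;
}
```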

Fig. 1 Structure of an RBF network (input layer x1 ... xn, a layer of centres, and an output layer producing fr(x))


Cybenko (1989) rigorously proved that a two-layered feed-forward neural network can uniformly approximate any continuous function. RBF networks have a strong biological background and can approximate any non-linear function. Because the relation between the link weights and the output is linear, linear optimization methods can be used to ensure global convergence. The crucial problem is then how to select the centres appropriately (Feng, 1994).

According to the features of nonlinear identification, a statistical F test is used to choose the number of centres, and the K-means method is used for locating the centres (Bian, 1988).

The basis of the K-means method is the sum-of-squared-errors criterion. Suppose the training set consists of M samples. If $N_i$ is the number of samples in the i-th cluster $\Gamma_i$, and $C_i$ is the mean of those samples, calculate the sum $J_e$ of the squared distances from each sample point to its cluster centre. Iterations are then used to find the clustering that minimizes $J_e$. If the pattern samples are well distributed, convergence is obtained after 3-5 iterations.
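The following is a minimal C sketch of the clustering step described above, in the batch form (assign each sample to its nearest centre, recompute the means per eqn (6), repeat until $J_e$ stops decreasing); the sequential reassignment rule of eqns (8)-(9) in Section 3 refines the same idea. All identifiers and array sizes are illustrative assumptions:

```c
#include <math.h>
#include <float.h>

#define M_SAMPLES 100  /* number of training samples (assumption) */
#define N_IN      5    /* input dimension (assumption)            */
#define NR        10   /* number of centres (assumption)          */

static double dist2(const double a[N_IN], const double b[N_IN])
{
    double s = 0.0;
    for (int j = 0; j < N_IN; j++) { double d = a[j] - b[j]; s += d * d; }
    return s;
}

/* One pass of batch K-means: assign each sample to its nearest centre,
   recompute every centre as the mean of its cluster (cf. eqn (6)),
   and return the sum of squared distances Je (cf. eqn (7)). */
double kmeans_pass(const double x[M_SAMPLES][N_IN],
                   double centres[NR][N_IN],
                   int label[M_SAMPLES])
{
    double je = 0.0;

    /* assignment step */
    for (int m = 0; m < M_SAMPLES; m++) {
        double best = DBL_MAX;
        for (int i = 0; i < NR; i++) {
            double d = dist2(x[m], centres[i]);
            if (d < best) { best = d; label[m] = i; }
        }
        je += best;
    }

    /* update step: each centre becomes the mean of its samples */
    for (int i = 0; i < NR; i++) {
        double sum[N_IN] = {0.0};
        int ni = 0;
        for (int m = 0; m < M_SAMPLES; m++) {
            if (label[m] != i) continue;
            for (int j = 0; j < N_IN; j++) sum[j] += x[m][j];
            ni++;
        }
        if (ni > 0)
            for (int j = 0; j < N_IN; j++) centres[i][j] = sum[j] / ni;
    }
    return je;
}
```

In use, kmeans_pass would be called repeatedly until the returned $J_e$ no longer changes, which the paper reports typically takes 3-5 iterations for well-distributed samples.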

The number of centres is increased from small to large. The K-means method and least squares are used to determine the network parameters, and the F test is used to determine the number of centres. One noteworthy point is that, although in BP networks a larger number of hidden-layer nodes generally gives a more accurate network, this is not always the case for the number of centres in RBF networks.

For eqn. (1), let

$$H(k) = \left[\,1.0,\ \phi\!\left(\left\| X(k) - C_1 \right\|\right),\ \dots,\ \phi\!\left(\left\| X(k) - C_{n_r} \right\|\right)\,\right]^T \qquad (3)$$

$$\Theta(k) = \left[\,\theta_0,\ \theta_1,\ \dots,\ \theta_{n_r}\,\right]^T \qquad (4)$$

so that

$$f_r(X(k)) = H(k)^T \,\Theta(k) \qquad (5)$$

The least squares method is used to identify the network weights in eqn. (5). During on-line identification, the recursive least squares method is used to modify the network weights, so that the RBF network has good adaptability. If necessary, the number and location of the centres can also be modified.
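For reference, with the linear-in-parameters form of eqn (5), the batch least-squares estimate over N training samples is the standard linear regression solution (a textbook result, not stated explicitly in the paper):

$$\hat{\Theta} = \left(\sum_{k=1}^{N} H(k)\,H(k)^T\right)^{-1} \sum_{k=1}^{N} H(k)\,y(k),$$

where $y(k)$ is the measured output for sample $k$; the recursive form of eqn (10) below updates this estimate sample by sample without re-inverting the matrix.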


3. IMPLEMENTATION OF RBF NETWORK
ALGORITHM

The RBF algorithm procedure, implemented in the C language, is listed below:

1) Initialization;

2) Input the network parameters and the training pattern set, and normalize the data;

3) Use the K-means method to determine the cluster centres $C_i$, $i = 1, \dots, n_r$:

i. The M samples are preliminarily divided into $n_r$ clusters; the centre $C_i$ of each cluster and the sum $J_e$ are calculated:

$$C_i = \frac{1}{N_i} \sum_{X \in \Gamma_i} X \qquad (6)$$

$$J_e = \sum_{i=1}^{n_r} \sum_{X \in \Gamma_i} \left\| X - C_i \right\|^2 \qquad (7)$$

where $N_i$ represents the number of samples in the i-th cluster.
ii. For every sample X, suppose X currently belongs to cluster $\Gamma_i$. If $N_i = 1$, the next sample is considered; otherwise, calculate:

$$\rho_i = \frac{N_i}{N_i - 1} \left\| X - C_i \right\|^2 \qquad (8)$$

$$\rho_j = \frac{N_j}{N_j + 1} \left\| X - C_j \right\|^2, \qquad j \ne i \qquad (9)$$

If $\rho_k \le \rho_j$ for all j ($j = 1, \dots, n_r$), X is moved from $\Gamma_i$ into $\Gamma_k$, the centres $C_i$ and $C_k$ are recalculated, and $J_e$ is modified.

iii. If $J_e$ no longer changes, the iteration is finished; otherwise, return to step ii.
iv. The F test is used to determine the number of centres.

4) Construct H(k) according to eqn. (3), and use the recursive least squares method to identify the network parameters $\Theta$ (a C sketch of this recursive update is given after this procedure):

$$\gamma(k) = \frac{1}{\mu + H(k)^T P(k-1) H(k)} \qquad (10)$$

$$\Theta(k) = \Theta(k-1) + \gamma(k)\, P(k-1) H(k) \left[\, f_r(X(k)) - H(k)^T \Theta(k-1) \,\right]$$

$$P(k) = \frac{1}{\mu}\left[\, P(k-1) - \gamma(k) \left[ P(k-1) H(k) \right] \left[ P(k-1) H(k) \right]^T \,\right]$$


where $\mu$ represents the forgetting factor.

5) Results analysis and output.
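A minimal C sketch of the recursive least-squares update of eqn (10) used in step 4 follows. It assumes the parameter vector and the covariance matrix P are stored as plain arrays, y is the measured process output for the current sample, and mu is the forgetting factor; this is the standard RLS form, not the authors' original code:

```c
#define DIM 11   /* nr + 1 parameters: bias plus one weight per centre (assumption) */

/* One RLS step with forgetting factor mu (eqn (10)):
     gamma = 1 / (mu + H'P H)
     theta = theta + gamma * P H * (y - H'theta)
     P     = (P - gamma * (P H)(P H)') / mu                               */
void rls_update(double theta[DIM], double p[DIM][DIM],
                const double h[DIM], double y, double mu)
{
    double ph[DIM];          /* P(k-1) * H(k)                          */
    double hph = 0.0;        /* H(k)' P(k-1) H(k)                      */
    double pred = 0.0;       /* H(k)' theta(k-1), the model prediction */

    for (int i = 0; i < DIM; i++) {
        ph[i] = 0.0;
        for (int j = 0; j < DIM; j++) ph[i] += p[i][j] * h[j];
        hph  += h[i] * ph[i];
        pred += h[i] * theta[i];
    }

    double gamma = 1.0 / (mu + hph);
    double err   = y - pred;          /* innovation */

    for (int i = 0; i < DIM; i++)
        theta[i] += gamma * ph[i] * err;

    for (int i = 0; i < DIM; i++)
        for (int j = 0; j < DIM; j++)
            p[i][j] = (p[i][j] - gamma * ph[i] * ph[j]) / mu;
}
```

Choosing mu slightly below 1 discounts old data, which is what gives the model its self-tuning behaviour on a time-varying process.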


4. USING RBF NEURAL NETWORK
TO ESTABLISH SELF-TUNING
MODEL OF A REACTOR

As artificial neural networks (ANN) can approximate any function, they provide a highly effective method for modeling and state estimation of nonlinear processes. Below, a practical application of ANN to industrial prediction is discussed: an on-line self-tuning model of temperature for a propylene hydration reactor is obtained.

The main factors that influence the reactor temperature T are the steam flow rate $R_1$, the reactor inlet temperature $R_2$, the system pressure $R_3$, the propylene feed concentration $R_4$, and the molar ratio $R_5$. To establish the model of the outlet temperature, 314 groups of production data were sampled at the field site in March 1994. When RBF neural networks are used to establish the model, 100 groups of data are chosen as the training set and the rest of the data as the test set. For comparison with other modeling methods, BP neural networks are used to obtain one model, and mechanism analysis, orthogonal design and correlation analysis are used to obtain the regression model (11) of the reactor temperature (Pan, et al., 1995; Xie, 1994).
$$\begin{aligned}
T = {} & 1.7301 - 0.6051 R_1 + 1.0309 R_2 + 6.0675 R_3 + 32.9618 R_4 + 15.2644 R_5 \\
       & - 0.3351/R_1 - 0.1336/R_5 - 0.1261/(\ln R_4)^3 - 11.3492\, R_5^{1/2} \\
       & - 0.00013\, R_2^{\,3} - 4.8434\, R_5^{\,3}
\end{aligned} \qquad (11)$$


Table 1  Comparison of precision of different models

Absolute error            Regression   BP network   RBF network
Training      average     0.0759       0.4716       0.4108
              maximum     0.4774       1.8202       1.4979
Test          average     0.6054       0.6055       0.3753
              maximum     1.7897       2.4112       1.4538
Time-varying  average     0.9903       2.3258       0.5742
              maximum     3.2617       5.5864       1.5503



Fig. 2: Comparison of time-varying process and BP network output



Fig. 3: Comparison of time-varying process and regression model output



Fig. 4: Comparison of time-varying process and
RBF network output
It takes 44 seconds on a 486/DX33 computer to establish the temperature model using RBF neural networks. Using a BP neural network with a 5-5-3-1 structure, it takes 485 seconds to reach the error index err = 0.006 after 3000 learning iterations. If learning is continued further, the error on the test groups increases even as the error on the training groups decreases, i.e. the network overfits and its generalization capability is reduced. The error indices of the models established by the above three methods are shown in Table 1. The results show that RBF and BP networks can establish a model easily and that the precision of these models is better than, or at least the same as, that of the regression model.

In June 1995, we sampled 100 groups of production data at the field site again. Because the feed, the operating conditions and other factors had changed, the temperature range of the reactor had shifted from 202~206 °C to 203~209 °C. Since the inputs and output of the model go beyond the scope of the training samples, the BP neural network model cannot work normally (Fig. 2); with on-line tuning, the regression model can basically reflect the behaviour of the reactor (Fig. 3); the RBF neural network model has strong adaptability, so it gives the smallest absolute errors and the highest precision (Fig. 4).


5. CONCLUSIONS

In conclusion, RBF neural networks have a much faster training speed than BP neural networks, although the two basically have the same capability of approximating nonlinear functions. More importantly, RBF neural networks are adaptive, so they can be used for on-line identification and estimation. Compared with a traditional mechanism model, the self-tuning model of a time-varying nonlinear system established by RBF neural networks is not only simpler but also has higher precision and adaptability. It therefore has good prospects for application in industrial processes.



REFERENCES

Bian, Z. (1988). Pattern Recognition, pp. 220-230. Tsinghua University Press, Beijing (China).
Chen, S., S.A. Billings, et al. (1990). Practical identification of NARMAX models using radial basis functions. Int. J. Control, 52(6), pp. 1327-1350.
Cybenko, G. (1989). Approximations by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems, 2, 303-314.
Feng, C. and Liu, Y. (1994). Status quo and problems of neural networks control. Control Theory and Applications (China), 11(1), 103-106.
Pan, L., Ma, J. and Xie, C. (1995). Mathematical model of a propylene hydration reactor. Journal of Chemical Industry and Engineering (China), 46(2), 255-258.
Xie, C. (1994). On-line optimization of a propylene hydration reactor. Thesis, Beijing Univ. of Chem. Tech., Beijing (China).


