
Radial Basis Function Networks: Algorithms

Neural Computation : Lecture 14


John A. Bullinaria, 2014

1. The RBF Mapping
2. The RBF Network Architecture
3. Computational Power of RBF Networks
4. Training an RBF Network
5. Unsupervised Optimization of the Basis Functions
6. Computing the Output Weights
7. Supervised RBF Network Training

The Radial Basis Function (RBF) Mapping


We are working in the standard regression framework of function approximation, with a set of N training data points in a D dimensional input space, such that each input vector x^p = {x_i^p : i = 1, ..., D} has a corresponding K dimensional target output t^p = {t_k^p : k = 1, ..., K}. The target outputs will generally be generated by some underlying functions g_k(x) plus random noise. The goal is to approximate the g_k(x) with functions y_k(x) of the form

$$y_k(\mathbf{x}) = \sum_{j=0}^{M} w_{kj}\,\phi_j(\mathbf{x})$$

We shall concentrate on the case of Gaussian basis functions

$$\phi_j(\mathbf{x}) = \exp\!\left(-\frac{\|\mathbf{x} - \boldsymbol{\mu}_j\|^2}{2\sigma_j^2}\right)$$

which have centres {μ_j} and widths {σ_j}. Naturally, the way to proceed is to develop a process for finding the appropriate values for M, {w_kj}, {μ_ij} and {σ_j}.
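As a concrete illustration, the following NumPy sketch evaluates this mapping for a batch of inputs, with an extra constant basis function φ_0(x) = 1 so that w_k0 acts as a bias; the function name rbf_forward and the array layout are illustrative choices rather than anything prescribed in the lecture.

```python
import numpy as np

def rbf_forward(X, centres, widths, W):
    """Evaluate y_k(x) = sum_{j=0..M} w_kj * phi_j(x) for a batch of inputs.

    X       : (N, D) input vectors x^p
    centres : (M, D) basis function centres mu_j
    widths  : (M,)   basis function widths sigma_j
    W       : (K, M+1) output weights, with column 0 multiplying phi_0 = 1
    """
    # Squared Euclidean distances ||x - mu_j||^2, shape (N, M)
    sq_dist = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    # Gaussian basis function activations phi_j(x)
    phi = np.exp(-sq_dist / (2.0 * widths ** 2))
    # Prepend the constant basis function phi_0 = 1 (bias term)
    phi = np.hstack([np.ones((X.shape[0], 1)), phi])
    return phi @ W.T  # outputs y_k(x^p), shape (N, K)
```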

The RBF Network Architecture


The RBF Mapping can be cast into a form that resembles a neural network:
[Network diagram: the inputs x_i (plus a bias input fixed at 1) feed M basis functions φ_j(x_i, μ_ij, σ_j) through the weights μ_ij; the basis function activations (plus a bias unit fixed at 1) feed the outputs y_k through the weights w_kj.]

The hidden to output layer part operates like a standard feed-forward MLP network, with the sum of the weighted hidden unit activations giving the output unit activations. The hidden unit activations are given by the basis functions φ_j(x, μ_j, σ_j), which depend on the weights {μ_ij, σ_j} and input activations {x_i} in a non-standard manner.

Computational Power of RBF Networks


Intuitively, it is not difficult to understand why linear superpositions of localised basis
functions are capable of universal approximation. More formally:
Hartman, Keeler & Kowalski (1990, Neural Computation, vol. 2, pp. 210-215)
provided a formal proof of this property for networks with Gaussian basis functions
in which the widths {σ_j} are treated as adjustable parameters.
Park & Sandberg (1991, Neural Computation, vol. 3, pp. 246-257; and 1993, Neural
Computation, vol. 5, pp. 305-316) showed that with only mild restrictions on the
basis functions, the universal function approximation property still holds.
As with the corresponding proofs for MLPs, these are existence proofs which rely on
the availability of an arbitrarily large number of hidden units (i.e. basis functions).
However, they do provide a theoretical foundation on which practical applications can
be based with confidence.

Training RBF Networks


The proofs about computational power tell us what an RBF Network can do, but nothing about how to find all its parameters/weights {M, w_kj, μ_j, σ_j}.

Unlike in MLPs, the hidden and output layers in RBF networks operate in very different ways, and the corresponding weights have very different meanings and properties. It is therefore appropriate to use different learning algorithms for them.

The input to hidden weights (i.e., the basis function parameters {μ_ij, σ_j}) can be trained (or set) using any one of several possible unsupervised learning techniques.

Then, after the input to hidden weights are found, they are kept fixed while the hidden to output weights are learned. This second stage of training only involves a single layer of weights {w_kj} and linear output activation functions, and we have already seen how the necessary weights can be found very quickly and easily using simple matrix pseudo-inversion, as in Single Layer Regression Networks or Extreme Learning Machines.

Basis Function Optimization


One major advantage of RBF networks is the possibility of choosing suitable hidden
unit/basis function parameters without having to perform a full non-linear optimization
of the whole network. We shall now look at three ways of doing this:
1. Fixed centres selected at random
2. Clustering based approaches
3. Orthogonal Least Squares
These are all unsupervised techniques, which will be particularly useful in situations
where labelled data is in short supply, but there is plenty of unlabelled data (i.e. inputs
without output targets). Later we shall look at how one might try to get better results by
performing a full supervised non-linear optimization of the network instead.
With either approach, determining a good value for M remains a problem. It will
generally be appropriate to compare the results for a range of different values, following
the same kind of validation/cross validation methodology used for optimizing MLPs.

Fixed Centres Selected At Random


The simplest and quickest approach to setting the RBF parameters is to have their
centres fixed at M points selected at random from the N data points, and to set all their
widths to be equal and fixed at an appropriate size for the distribution of data points.
Specifically, we can use normalised RBFs centred at {μ_j} defined by

$$\phi_j(\mathbf{x}) = \exp\!\left(-\frac{\|\mathbf{x} - \boldsymbol{\mu}_j\|^2}{2\sigma_j^2}\right) \quad \text{where} \quad \{\boldsymbol{\mu}_j\} \subset \{\mathbf{x}^p\}$$

and the σ_j are all related in the same way to the maximum or average distance between the chosen centres μ_j. Common choices are

$$\sigma_j = \frac{d_{\max}}{\sqrt{2M}} \qquad \text{or} \qquad \sigma_j = 2\,d_{\mathrm{ave}}$$

which ensure that the individual RBFs are neither too wide, nor too narrow, for the given training data. For large training sets, this approach gives reasonable results.
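A minimal sketch of this procedure, assuming NumPy and purely illustrative names, is:

```python
import numpy as np

def random_centres_and_width(X, M, rng=np.random.default_rng(0)):
    """Choose M centres at random from the N data points and a shared width.

    Implements the heuristic sigma = d_max / sqrt(2M), where d_max is the
    largest distance between any pair of chosen centres; sigma = 2 * d_ave
    (average pairwise distance) would be the other common choice.
    """
    centres = X[rng.choice(len(X), size=M, replace=False)]          # (M, D)
    pairwise = np.linalg.norm(centres[:, None, :] - centres[None, :, :], axis=2)
    sigma = pairwise.max() / np.sqrt(2.0 * M)
    return centres, sigma
```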

Example 1 : M = N, σ = 2d_ave

From: Neural Networks for Pattern Recognition, C. M. Bishop, Oxford University Press, 1995.

Example 2 : M << N, σ << 2d_ave

From: Neural Networks for Pattern Recognition, C. M. Bishop, Oxford University Press, 1995.

Example 3 : M << N, σ >> 2d_ave

From: Neural Networks for Pattern Recognition, C. M. Bishop, Oxford University Press, 1995.

Example 4 : M << N, σ = 2d_ave

From: Neural Networks for Pattern Recognition, C. M. Bishop, Oxford University Press, 1995.

Clustering Based Approaches


An obvious problem with picking the RBF centres to be a random subset of the data points is that the data points may not be evenly distributed throughout the input space, and relying on randomly chosen points to be the best subset is a risky strategy.
Thus, rather than picking random data points, a better approach is to use a principled
clustering technique to find a set of RBF centres which more accurately reflect the
distribution of the data points. We shall consider two such approaches:
1. K-Means Clustering which we shall look at now.
2. Self Organizing Maps which we shall look at next week.
Both approaches identify subsets of neighbouring data points and use them to partition
the input space, and then an RBF centre can be placed at the centre of each partition/
cluster. Once the RBF centres have been determined in this way, each RBF width can
then be set according to the variances of the points in the corresponding cluster.

K-Means Clustering Algorithm


The K-Means Clustering Algorithm starts by picking the number K of centres and
randomly assigning the data points {x^p} to K subsets. It then uses a simple re-estimation procedure to end up with a partition of the data points into K disjoint sub-sets or clusters S_j containing N_j data points that minimizes the sum squared clustering function

$$J = \sum_{j=1}^{K} \sum_{p \in S_j} \|\mathbf{x}^p - \boldsymbol{\mu}_j\|^2$$

where μ_j is the mean/centroid of the data points in set S_j given by

$$\boldsymbol{\mu}_j = \frac{1}{N_j} \sum_{p \in S_j} \mathbf{x}^p$$

It does that by iteratively finding the nearest mean μ_j to each data point {x^p}, reassigning the data points to the associated clusters S_j, and then recomputing the cluster means μ_j. The clustering process terminates when no more data points switch from one cluster to another. Multiple runs can be carried out to find the lowest J.
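A bare-bones NumPy sketch of the algorithm is given below; it stops when the means stop moving (which amounts to no further reassignments) and also returns the within-cluster standard deviations as candidate RBF widths. The names are illustrative only.

```python
import numpy as np

def k_means(X, K, rng=np.random.default_rng(0), max_iter=100):
    """Partition the data {x^p} into K clusters and return means and spreads.

    The cluster means can serve as the RBF centres mu_j, and the within-cluster
    standard deviations as rough values for the widths sigma_j.
    """
    # Initialise the K means as randomly chosen, distinct data points
    means = X[rng.choice(len(X), size=K, replace=False)].copy()
    for _ in range(max_iter):
        # Assign every data point to its nearest mean
        dists = np.linalg.norm(X[:, None, :] - means[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each mean as the centroid of its cluster (mu_j = mean of S_j)
        new_means = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                              else means[j] for j in range(K)])
        if np.allclose(new_means, means):  # means unchanged => assignments stable
            break
        means = new_means
    # Spread of each cluster, usable as the corresponding RBF width
    widths = np.array([X[labels == j].std() if np.any(labels == j) else 1.0
                       for j in range(K)])
    return means, widths
```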

Orthogonal Least Squares


Another principled approach to selecting a sub-set of data points as the basis function
centres is based on the technique of orthogonal least squares.
This involves the sequential addition of new basis functions, each centred on one of the data points. If we already have L such basis functions, there remain N − L possibilities for the next, and we can determine the output weights for each of those N − L potential networks. Then the (L+1)th basis function which leaves the smallest residual sum squared output error is chosen, and we then go on to choose the next basis function.
This sounds wasteful, but if we construct a set of orthogonal vectors in the space S
spanned by the vectors of hidden unit activations for each pattern in the training set, we
can calculate directly which data point should be chosen as the next basis function at
each stage, and the output layer weights can be determined at the same time.
To get good generalization we can use validation/cross validation to stop the process
when an appropriate number of data points have been selected as centres.
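As a rough illustration of the selection idea only, the sketch below implements the naive greedy procedure described in the first paragraph, refitting the output weights by ordinary least squares for every candidate centre; it does not use the orthogonalisation trick that makes true orthogonal least squares efficient, and it assumes a single shared width sigma.

```python
import numpy as np

def greedy_centre_selection(X, T, sigma, n_centres):
    """Naive forward selection of RBF centres by residual sum-squared error.

    At each step, try every unused data point as the next centre, refit the
    output weights by least squares, and keep the candidate giving the smallest
    residual error. The real OLS algorithm reaches the same selections far more
    cheaply by orthogonalising the candidate columns of the design matrix.
    """
    N = len(X)
    chosen = []
    for _ in range(n_centres):
        best_err, best_p = np.inf, None
        for p in range(N):
            if p in chosen:
                continue
            centres = X[chosen + [p]]
            sq_dist = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
            Phi = np.exp(-sq_dist / (2.0 * sigma ** 2))
            W_T, *_ = np.linalg.lstsq(Phi, T, rcond=None)   # output weights
            err = np.sum((Phi @ W_T - T) ** 2)              # residual error
            if err < best_err:
                best_err, best_p = err, p
        chosen.append(best_p)
    return X[chosen], chosen
```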

Dealing with the Output Layer


Since the hidden unit activations φ_j(x, μ_j, σ_j) are fixed while the output weights {w_kj} are determined, we essentially only have to find the weights that optimize a single layer linear network. As with MLPs, we can define a sum-squared output error measure

$$E = \frac{1}{2} \sum_p \sum_k \left( t_k^p - y_k(\mathbf{x}^p) \right)^2$$

and here the outputs are a simple linear combination of the hidden unit activations, i.e.

$$y_k(\mathbf{x}^p) = \sum_{j=0}^{M} w_{kj}\,\phi_j(\mathbf{x}^p)$$

At the minimum of E the gradients with respect to all the weights w_ki will be zero, so

$$\frac{\partial E}{\partial w_{ki}} = -\sum_p \left( t_k^p - \sum_{j=0}^{M} w_{kj}\,\phi_j(\mathbf{x}^p) \right) \phi_i(\mathbf{x}^p) = 0$$

and linear equations like this are well known to be easy to solve analytically.


Computing the Output Weights


The equations for the weights are most conveniently written in matrix form by defining matrices with components (W)_kj = w_kj, (Φ)_pj = φ_j(x^p), and (T)_pk = t_k^p. This gives

$$\Phi^{\mathrm{T}} \left( \mathbf{T} - \Phi \mathbf{W}^{\mathrm{T}} \right) = \mathbf{0}$$

and the formal solution for the weights is

$$\mathbf{W}^{\mathrm{T}} = \Phi^{\dagger} \mathbf{T}$$

in which appears the standard pseudo-inverse of Φ defined as

$$\Phi^{\dagger} = \left( \Phi^{\mathrm{T}} \Phi \right)^{-1} \Phi^{\mathrm{T}}$$

which can be seen to have the property Φ†Φ = I. Thus the network weights can be computed by fast linear matrix inversion techniques. In practice, it is normally best to use Singular Value Decomposition (SVD) techniques that can avoid problems due to possible ill-conditioning of Φ, i.e. Φ^TΦ being singular or near singular.
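In NumPy this whole computation reduces to a single call, since np.linalg.pinv computes the pseudo-inverse via SVD and therefore copes with Φ being ill-conditioned; the sketch below just fixes the assumed array shapes.

```python
import numpy as np

def output_weights(Phi, T):
    """Solve W^T = Phi_dagger @ T for the hidden-to-output weights.

    Phi : (N, M+1) design matrix with entries phi_j(x^p) (column 0 = bias)
    T   : (N, K)   target matrix with entries t_k^p
    Returns W^T with shape (M+1, K).
    """
    # pinv uses SVD internally, avoiding problems with near-singular Phi^T Phi
    return np.linalg.pinv(Phi) @ T
```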

Supervised RBF Network Training


Supervised training of the basis function parameters will generally give better results
than unsupervised procedures, but the computational costs are usually enormous.
The obvious approach would be to perform gradient descent on a sum squared output
error function as we did for MLPs. That would give the error function:
$$E = \sum_p \sum_k \left( t_k^p - y_k(\mathbf{x}^p) \right)^2 = \sum_p \sum_k \left( t_k^p - \sum_{j=0}^{M} w_{kj}\,\phi_j(\mathbf{x}^p, \boldsymbol{\mu}_j, \sigma_j) \right)^2$$

and one could iteratively update the weights/basis function parameters using

$$\Delta w_{kj} = -\eta_w \frac{\partial E}{\partial w_{kj}} \qquad \Delta \mu_{ij} = -\eta_\mu \frac{\partial E}{\partial \mu_{ij}} \qquad \Delta \sigma_j = -\eta_\sigma \frac{\partial E}{\partial \sigma_j}$$

We will have all the problems of choosing the learning rates η, avoiding local minima and so on, that we had for training MLPs by gradient descent. Also, there is a tendency for the basis function widths to grow large leaving non-localised basis functions.
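To make the update rules concrete, here is a sketch of one full-batch gradient descent step over all three parameter sets, with the gradients worked out from the Gaussian basis functions defined earlier; the bias basis function is omitted for brevity, and all names, shapes and learning-rate symbols are illustrative assumptions.

```python
import numpy as np

def supervised_rbf_step(X, T, centres, widths, W, eta_w, eta_mu, eta_sigma):
    """One gradient-descent update of all RBF parameters on E = sum (t - y)^2.

    X : (N, D), T : (N, K), centres : (M, D), widths : (M,), W : (K, M).
    """
    diff = X[:, None, :] - centres[None, :, :]            # x^p - mu_j, (N, M, D)
    sq_dist = (diff ** 2).sum(axis=2)                     # ||x^p - mu_j||^2, (N, M)
    phi = np.exp(-sq_dist / (2.0 * widths ** 2))          # basis activations, (N, M)
    Y = phi @ W.T                                         # network outputs, (N, K)
    err = T - Y                                           # t_k^p - y_k(x^p), (N, K)

    # dE/dw_kj = -2 * sum_p err_pk * phi_pj
    grad_W = -2.0 * err.T @ phi                           # (K, M)

    # Back-propagated term for each basis function: sum_k err_pk * w_kj
    delta = err @ W                                       # (N, M)

    # dE/dmu_j = -2 * sum_p delta_pj * phi_pj * (x^p - mu_j) / sigma_j^2
    grad_mu = -2.0 * ((delta * phi)[:, :, None] * diff).sum(axis=0) / widths[:, None] ** 2

    # dE/dsigma_j = -2 * sum_p delta_pj * phi_pj * ||x^p - mu_j||^2 / sigma_j^3
    grad_sigma = -2.0 * (delta * phi * sq_dist).sum(axis=0) / widths ** 3

    return (W - eta_w * grad_W,
            centres - eta_mu * grad_mu,
            widths - eta_sigma * grad_sigma)
```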

Overview and Reading


1. We began by defining Radial Basis Function (RBF) mappings and the corresponding network architecture.
2. Then we considered the computational power of RBF networks.
3. We then saw how the two layers of network weights were rather different and that different techniques were appropriate for training each of them.
4. We looked at three different unsupervised techniques for optimizing the basis functions, and then how the corresponding output weights could be computed using fast linear matrix inversion techniques.
5. We ended by looking at how supervised RBF training would work.

Reading

1. Bishop: Sections 5.2, 5.3, 5.9, 5.10, 3.4
2. Haykin-2009: Sections 5.4, 5.5, 5.6, 5.7


