Radial Basis Functions Theory and Implementations
M. D. BUHMANN
University of Giessen
Published by Cambridge University Press, Cambridge, United Kingdom
http://www.cambridge.org
© Cambridge University Press 2003
Preface page ix
1 Introduction 1
1.1 Radial basis functions 2
1.2 Applications 5
1.3 Contents of the book 8
2 Summary of methods and applications 11
2.1 Invertibility of interpolation matrices 11
2.2 Convergence analysis 16
2.3 Interpolation and convergence 23
2.4 Applications to PDEs 29
3 General methods for approximation and interpolation 36
3.1 Polynomial schemes 37
3.2 Piecewise polynomials 41
3.3 General nonpolynomial methods 45
4 Radial basis function approximation on infinite grids 48
4.1 Existence of interpolants 49
4.2 Convergence analysis 65
4.3 Numerical properties of the interpolation linear system 89
4.4 Convergence with respect to parameters in the radial functions 95
5 Radial basis functions on scattered data 99
5.1 Nonsingularity of interpolation matrices 100
5.2 Convergence analysis 105
5.3 Norm estimates and condition numbers of interpolation matrices 136
1 Introduction
In the present age, when computers are applied almost anywhere in science,
engineering and, indeed, all around us in day-to-day life, it becomes more and
more important to implement mathematical functions for efficient evaluation in
computer programs. It is usually necessary for this purpose to use all kinds of
‘approximations’ of functions rather than their exact mathematical form. There
are various reasons why this is so. A simple one is that in many instances it
is not possible to implement the functions exactly, because, for instance, they
are only represented by an infinite expansion. Furthermore, the function we
want to use may not be completely known to us, or may be too expensive or
demanding of computer time and memory to compute in advance, which is
another typical, important reason why approximations are required. This is true
even in the face of ever increasing speed and computer memory availability,
given that additional memory and speed will always increase the demands of the
users and the size of the problems which are to be solved. Finally, the data that
define the function may have to be computed interactively or by a step-by-step
approach, which again makes it expedient to compute approximations. With those
we can then pursue further computations, for instance, or further evaluations
that are required by the user, or display data or functions on a screen. Such cases
are absolutely standard in mathematical methods for modelling and analysing
functions; in this context, analysis can mean, e.g., looking for their stationary
points with standard optimisation codes such as quasi-Newton methods.
As we can see, the applications of general purpose methods for functional
approximations are manifold and important. One such class of methods is
introduced in this book and forms its subject area; we are particularly interested
in the case when the functions to be approximated (the approximands) are
multivariate and known only through their values at scattered data sites.
1.1 Radial basis functions
The ‘radial basis function approach’ is especially well suited for those
cases. The most familiar way of fitting the data is interpolation, where we require
the approximant s to match the data exactly, s(ξ) = f_ξ for all ξ ∈ Ξ, Ξ being
the set of data sites. Other choices are nonetheless possible and used in practice,
and they can indeed be very desirable, such as least squares approximations or
‘quasi-interpolation’, a variant of interpolation where s still depends in a simple
way on the f_ξ, ξ ∈ Ξ, while not necessarily matching each f_ξ exactly. We will
come back to this type
of approximation at many places in this book. We remark that if we know how to
approximate a function f : Rn → R we can always approximate a vector-valued
approximand, call it F: Rn → Rm , m > 1, componentwise.
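To make the distinction concrete, here is a minimal univariate sketch (our own illustration, not code from the book) of quasi-interpolation on a grid of spacing h: the approximant s(x) = Σ_ξ f_ξ ψ((x − ξ)/h) depends on the data only through a fixed function ψ, here a piecewise linear ‘hat’; a vector-valued approximand would simply be handled by running the same scheme once per component.

    import numpy as np

    def hat(t):
        # piecewise linear B-spline ("hat"), supported on [-1, 1]
        return np.maximum(0.0, 1.0 - np.abs(t))

    def quasi_interpolant(x, centres, f_values, h):
        # s(x) = sum over centres of f_xi * hat((x - xi)/h): the
        # coefficients are the data themselves, so s depends on f
        # in a simple (linear and local) way
        return sum(fx * hat((x - xi) / h) for xi, fx in zip(centres, f_values))

    h = 0.1
    centres = np.linspace(0.0, 1.0, 11)      # gridded data sites
    f = np.sin(2 * np.pi * centres)          # samples of the approximand
    x = np.linspace(0.05, 0.95, 7)           # evaluation points
    print(quasi_interpolant(x, centres, f, h) - np.sin(2 * np.pi * x))

With the hat function this particular scheme happens to reproduce the data at the grid points; smoother choices of ψ normally give up that exactness, which is precisely the quasi-interpolation situation described above.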
From these general considerations, we now come back to our specific con-
cepts for the subject area of this monograph, namely, for radial basis function
approximations the approximants s are usually finite linear combinations of
translates of a radially symmetric basis function, say φ(‖·‖), where ‖·‖ is the
Euclidean norm. Radial symmetry means that the value of the function depends
only on the Euclidean distance of the argument from the origin; any rotation of
the argument makes no difference to the function value.
The translates are along the points ξ ∈ Ξ, whence we consider linear combi-
nations of φ(‖· − ξ‖). So the data sites enter already at two places here, namely
as the points where we wish to match interpolant s and approximand f , and as
the vectors by which we translate our radial basis function. The latter are called
the centres, and we observe that their choice makes the space S dependent on the
set Ξ. There are good reasons for formulating the approximants in the fashion
used in this monograph.
Indeed, it is a well-known fact that interpolation to arbitrary data in more than
one dimension can easily become a singular problem unless the linear space S
from which s stems depends on the set of points Ξ – or the Ξ have only very
restricted shapes. For any fixed, centre-independent space, there are some data
point distributions that cause singularity.
In fact, polynomial interpolation is the standard example where this problem
occurs and we will explain that in detail in Chapter 3. This is why radial basis
functions always define a space S ⊂ C(Rn) which depends on Ξ. The simplest
example is, for a finite set Ξ of centres in Rn,

(1.1)        S = { Σ_{ξ∈Ξ} λ_ξ ‖· − ξ‖ : λ_ξ ∈ R }.

Here the ‘radial basis function’ is simply φ(r) = r, the radial symmetry stem-
ming from the Euclidean norm ‖·‖, and we are shifting this norm in (1.1) by
the centres ξ.
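To illustrate (1.1) concretely (a small sketch of our own, not code from the book), one can require s(ξ) = f_ξ at every centre ξ ∈ Ξ; writing s = Σ_{ζ∈Ξ} λ_ζ ‖· − ζ‖, these conditions form a square linear system in the λ_ζ whose matrix has entries ‖ξ − ζ‖ and which, as remarked below, is nonsingular whenever the centres are distinct.

    import numpy as np

    rng = np.random.default_rng(0)
    centres = rng.random((20, 2))                      # scattered centres Xi in R^2
    f = np.sin(centres[:, 0]) * np.cos(centres[:, 1])  # data values f_xi

    # interpolation matrix A[i, j] = phi(||xi_i - xi_j||) with phi(r) = r
    A = np.linalg.norm(centres[:, None, :] - centres[None, :, :], axis=-1)
    lam = np.linalg.solve(A, f)                        # coefficients lambda_xi

    def s(x):
        # s(x) = sum over centres of lambda_xi * ||x - xi||, an element of S
        return np.linalg.norm(x - centres, axis=-1) @ lam

    # s reproduces the data at every centre, up to rounding
    print(max(abs(s(c) - fc) for c, fc in zip(centres, f)))

With 20 distinct centres the printed residual is at rounding level, illustrating the unique solvability of interpolation from (1.1).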
More generally, radial basis function spaces are spanned by the translates
φ(‖· − ξ‖), ξ ∈ Ξ, of other given radial basis functions φ.
As the later
analysis will show, radial symmetry is not the most important property that
makes these functions such suitable choices for approximating smooth func-
tions as they are, but rather their smoothness and certain properties of their
Fourier transform. Nonetheless we bow to convention and speak of radial basis
functions even when we occasionally consider general n-variate φ: Rn → R
and their translates φ(· − ξ ) for the purpose of approximation. And, at any
rate, most of these basis functions that we encounter in theory and practice are
radial. This is because it helps in applications to consider genuinely radial ones,
as the composition with the Euclidean norm makes the approach technically in
many respects a univariate one; we will see more of this especially in Chapter 4.
Moreover, we shall at all places make a clear distinction between considering
general n-variate φ: Rn → R and radially symmetric φ(‖·‖) and carefully state
whether we use one or the other in the following chapters.
Unlike high degree spline approximation with scattered data in more than
one dimension, and unlike the polynomial interpolation already mentioned, the
interpolation problem from the space (1.1) is always uniquely solvable for sets
of distinct data sites ξ , and this is also so for multiquadrics and Gaussians. For
multivariate polynomial spline spaces on nongridded data it is up to now not
even possible in general to find the exact dimension of the spline space! Thus
we may very well be unable to interpolate uniquely from that spline space.
Only upper and lower bounds on that space’s dimension are available.
There exist radial basis functions φ of compact support for which the interpo-
lation problem is nonsingular under some restrictions, but these restrictions are
only simple bounds on the dimension n of the space Rn from which the data
sites come. We
will discuss those radial basis functions of compact support in Chapter 6 of
this book.
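The nonsingularity just described is easy to observe numerically. The following small experiment (ours, not from the book) forms the Gaussian interpolation matrix A = (exp(−‖ζ − ξ‖²)) for scattered centres in five-dimensional space; since this matrix is positive definite for distinct centres in any dimension, a Cholesky factorisation succeeds.

    import numpy as np

    rng = np.random.default_rng(1)
    centres = rng.random((30, 5))     # 30 distinct scattered sites in R^5

    r = np.linalg.norm(centres[:, None, :] - centres[None, :, :], axis=-1)
    A = np.exp(-r**2)                 # Gaussian interpolation matrix

    np.linalg.cholesky(A)             # succeeds: A is positive definite
    print(np.linalg.cond(A))          # finite condition number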
Further remarkable properties of radial basis functions that render them
highly efficient in practice are their easily adjustable smoothness and their
powerful convergence properties. To demonstrate both, consider the ubiquitous
multiquadric function φ(r) = √(r² + c²), which is infinitely often continuously
differentiable for c > 0 and only continuous for c = 0, since in the latter case
φ(r) = r and the resulting φ(‖·‖) = ‖·‖ is not differentiable at the origin.
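A quick numerical check (our sketch, with the parameter c as above) makes this visible in one dimension, where φ(‖x‖) = √(x² + c²): for c > 0 the one-sided difference quotients at the origin both tend to 0, while for c = 0 they are −1 and +1, the kink of ‖x‖.

    import numpy as np

    def mq(x, c):
        # the multiquadric composed with the Euclidean norm, in one dimension
        return np.sqrt(x**2 + c**2)

    eps = 1e-3
    for c in (1.0, 0.0):
        left  = (mq(0.0, c) - mq(-eps, c)) / eps   # slope from the left
        right = (mq(eps, c) - mq(0.0, c)) / eps    # slope from the right
        print(c, left, right)  # c = 1: both near 0; c = 0: -1 and +1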
1.2 Applications
Consequently, it is no longer a surprise that in many applications, radial basis
functions have been shown to be most useful. Purposes and applications of such
approximations and in particular of interpolation are manifold. As we have al-
ready remarked, there are many applications especially in the sciences and in
mathematics. They include, for example, mappings of two- or three-dimensional
images such as portraits or underwater sonar scans into other images for com-
parison. In this important application, interpolation comes into play because
some special features of an image may have to be preserved while others need
not be mapped exactly, thus enabling a comparison of some features that may
differ while at the same time retaining others. Such so-called ‘markers’ can be,
for example, certain points of the skeleton in an X-ray which has to be compared
with another one of the same person, taken at another time. The same structure
appears if we wish to compare sonar scans of a harbour at different times, the
rocks being suitable as markers this time. Thin-plate splines turned out to be
excellent for such very practical applications (Barrodale and Zala, 1999).
Measurements of potential or temperature on the earth’s surface at ‘scat-
tered’ meteorological stations or measurements on other multidimensional ob-
jects may give rise to interpolation problems that require the aforementioned
scattered data. Multiquadric approximations perform well for this type
of use (Hardy, 1990).
Further, so-called track data consist of data sites that lie very close together on
nearly parallel lines, as can occur, e.g., in measurements of sea temperature
with a boat that runs along lines parallel to the coast. The step size of the
measurements is thus very small along the lines, but the distance between the
lines may be 100 times that step size or more. Many interpolation algorithms fail on
such awkward distributions of data points, but radial basis function (here
multiquadric) methods do not (Carlson and Foley, 1992).
The approximation to so-called learning situations by neural networks usu-
ally leads to very high-dimensional interpolation problems with scattered data.
Girosi (1992) mentions radial basis functions as a very suitable approach to
this, partly because of their availability in arbitrary dimensions and of their
smoothness.
A typical application is in fire detectors. An advanced type of fire detector
has to look at several measured parameters, such as colour, spectrum, intensity
and movement of an observed object, from which it must decide whether it is
looking at a fire in the room or not. There is a learning procedure before the
implementation of the device, where several prescribed situations (these are the
data) are tested and the values zero (no fire) and one (fire) are interpolated, so
that the device can ‘learn’ to interpolate between these standard situations for
general situations later when it is used in real life.
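A toy version of such a device (entirely our illustration; the features, the Gaussian radial basis function and its scale are assumptions, not taken from the text) interpolates the prescribed 0/1 values at the training situations and then evaluates the interpolant in a new situation:

    import numpy as np

    rng = np.random.default_rng(2)
    # prescribed situations: rows of measured parameters
    # (say colour, spectrum, intensity, movement)
    X = rng.random((40, 4))
    y = (X[:, 2] > 0.6).astype(float)   # toy labels: 1 = fire, 0 = no fire

    def gauss_matrix(A, B):
        # Gaussian radial basis function with an assumed scale of 0.5
        d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
        return np.exp(-(d / 0.5)**2)

    lam = np.linalg.solve(gauss_matrix(X, X), y)   # interpolate the 0/1 data

    x_new = rng.random(4)                          # an unseen situation
    score = (gauss_matrix(x_new[None, :], X) @ lam)[0]
    print("fire" if score > 0.5 else "no fire", score)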
In another learning application, the data come from the raster of a screen
which shows the reading of a camera that serves as the eye of a robot. In this
application, it is immediately clear why we have a high-dimensional problem,
because each point on the square raster represents one parameter, which gives
a million parameters even at a relatively low resolution of 1000 by 1000. The data
come from showing objects to the robot which it should recognise as, for in-
stance, a wall it should not run into, or a robot friend, or its human master or
whatever. Each of these situations should be interpolated and from the inter-
polant the robot should then be able to recognise other, similar situations as
well. Invariances, such as recognising objects independently of viewing angle,
are also important in measurements of neural activity in the brain, where
researchers aim to recognise those activities of the nerve cells that appear when
someone is looking at an object and which are invariant under the angle from
which the object is viewed. This is currently an important research
area in neuro-physics (Eckhorn, 1999; Kremper, Schanze and Eckhorn, 2002)
where radial basis functions appear often in the associated physics literature.
See the above paper by Eckhorn for a partial list.
The numerical solution of partial differential equations also enters into the
long list of mathematical applications of radial basis function approximation.
In the event, Pollandt (1997) used them to perform approximations needed in
a multidimensional boundary element method to solve nonlinear elliptic PDEs
on a domain Ω, such as Δu_ℓ = p_ℓ(u(x), x), x ∈ Ω ⊂ Rn, ℓ = 1, . . . , N,
with Dirichlet boundary conditions u_ℓ|∂Ω = q_ℓ, where u = (u_1, . . . , u_N)^T
are suitably smooth functions and the p_ℓ are multivariate polynomials. Here, Δ
denotes the Laplace operator. The advantage of radial basis functions in this
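To give at least a flavour of radial basis functions in PDE computations, here is a one-dimensional collocation sketch of our own: unsymmetric collocation with multiquadrics for a Poisson model problem. This is an illustration under our assumptions, not Pollandt’s boundary element scheme, and the invertibility of the collocation matrix is not guaranteed in general, although it holds in this example.

    import numpy as np

    c = 0.3
    xi = np.linspace(0.0, 1.0, 21)        # centres = collocation points

    def phi(d):
        return np.sqrt(d**2 + c**2)       # multiquadric
    def phi_xx(d):
        # second derivative of x -> sqrt((x - xi)^2 + c^2)
        return c**2 / (d**2 + c**2)**1.5

    # solve -u'' = pi^2 sin(pi x) on [0, 1], u(0) = u(1) = 0,
    # whose exact solution is u(x) = sin(pi x)
    D = xi[:, None] - xi[None, :]
    A = -phi_xx(D)                        # interior rows: -u''(xi_i) = g(xi_i)
    A[0, :] = phi(D[0, :])                # boundary row: u(0) = 0
    A[-1, :] = phi(D[-1, :])              # boundary row: u(1) = 0

    rhs = np.pi**2 * np.sin(np.pi * xi)
    rhs[0] = rhs[-1] = 0.0
    lam = np.linalg.solve(A, rhs)

    x = np.linspace(0.0, 1.0, 201)
    u = phi(x[:, None] - xi[None, :]) @ lam
    print(np.max(np.abs(u - np.sin(np.pi * x))))   # small collocation error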