
Numerical Integration using Sparse Grids

Thomas Gerstner and Michael Griebel

Abteilung für Wissenschaftliches Rechnen und Numerische Simulation
Institut für Angewandte Mathematik
Universität Bonn
Wegelerstr. 6, D-53115 Bonn

Abstract
We present and review algorithms for the numerical integration of multivariate functions defined over d-dimensional cubes using several variants of the sparse grid method first introduced by Smolyak [51]. In this approach, multivariate quadrature formulas are constructed using combinations of tensor products of suited one-dimensional formulas. The computing cost is almost independent of the dimension of the problem if the function under consideration has bounded mixed derivatives. We suggest the usage of extended Gauss (Patterson) quadrature formulas as the one-dimensional basis of the construction and show their superiority in comparison to previously used sparse grid approaches based on the trapezoidal, Clenshaw-Curtis and Gauss rules in several numerical experiments and applications.

Classification: 65C20, 65D30, 65D32, 65M99, 65R20, 65U05, 65Y20, 90A09.
Keywords: Multivariate numerical quadrature, sparse grids, Smolyak's construction, complexity, curse of dimension.

1 Introduction
Multivariate integrals arise in many application fields, such as statistical mechanics, the valuation of financial derivatives, the discretization of partial differential and integral equations, or the numerical computation of path integrals. Conventional algorithms for the numerical computation of such integrals are often limited by the "curse of dimension", meaning that the computing cost grows exponentially with the dimension of the problem. Moreover, theoretical complexity investigations reveal that for many integration problems even the lower bounds on the computing cost grow exponentially with the dimension [54].

However, for special function classes, such as spaces of functions which have bounded mixed derivatives, Smolyak's construction [51] can overcome this curse of dimension to a certain extent. In this approach, multivariate quadrature formulas are constructed using combinations of tensor products of suited one-dimensional formulas. In this way, the number of function evaluations and the numerical accuracy become independent of the dimension of the problem up to logarithmic factors.
Smolyak's construction is known under various names, such as (discrete) blending method [22], Boolean method [11], or sparse grid method [58]. It has been applied to numerical integration by several authors, using the midpoint rule [2], the rectangle rule [41], the trapezoidal rule [3], the Clenshaw-Curtis rule [8, 37, 38] and Gauss rules [39] as the one-dimensional basis integration procedure. Further studies have been made concerning extrapolation methods [3], discrepancy measures [15] and complexity questions [57].

There is a large variety of other methods for the numerical integration of multivariate functions, such as Monte Carlo and Quasi-Monte Carlo methods [35], lattice rules [50], adaptive subdivision methods [18, 55] and approximation methods based on neural networks [1, 32]. Each of these methods is particularly suitable for functions from a certain function class and has a complexity which is then also independent or nearly independent of the dimension of the problem.

Further applications of Smolyak's construction are the Fourier transformation [28], wavelet analysis [52], the solution of elliptic and hyperbolic partial differential equations [5, 23, 27, 58], integral equations [26, 46], eigenvalue problems [16], interpolation and approximation [11, 53], global optimization [36], data compression [20] and image reconstruction [56].

The scope of this paper is to review several methods based on Smolyak's approach and to introduce additional constructions based on univariate extended Gauss (Patterson) formulas, which achieve the highest possible polynomial exactness among all nested quadrature formulas which use the same number of function evaluations. We also indicate some extensions and modifications of the method and show a numerically stable implementation. The performance of several variants of sparse grid quadrature formulas is compared in a variety of applications from computational physics and financial mathematics.

2 Problem Formulation
In the following, boldface letters indicate vectors or multi-indices. We consider the numerical integration of functions $f(\mathbf{x})$ from a function class $F$ over the d-dimensional unit hypercube $\Omega := [-1,1]^d$,
$$ I_d f := \int_\Omega f(\mathbf{x}) \, d\mathbf{x}, $$
by a sequence of $n_l^d$-point quadrature formulas with level $l \in \mathbb{N}$ and $n_l^d < n_{l+1}^d$,
$$ Q_l^d f := \sum_{i=1}^{n_l^d} w_{li} \cdot f(\mathbf{x}_{li}), $$
using the weights $w_{li}$ and abscissas $\mathbf{x}_{li}$. Furthermore, we define the underlying grid of a quadrature formula by
$$ \Gamma_l^d := \{ \mathbf{x}_{li} : 1 \le i \le n_l^d \} \subset [-1,1]^d. $$
Quadrature formulas are nested (imbedded) if the corresponding grids are nested, that is,
$$ \Gamma_l^d \subset \Gamma_{l+1}^d. $$
In order to compare the performance of different quadrature formulas we look at the quadrature error given by
$$ E_l^d f := |I_d f - Q_l^d f|. $$
Bounds for the quadrature error can be obtained by assuming certain smoothness conditions for the function class $F$, such as bounds on derivatives of functions $f \in F$.

3 Nested Univariate Quadrature Formulas

In the following, we give a short review of nested univariate quadrature formulas for functions $f \in C^r$ with
$$ C^r := \left\{ f : \Omega \to \mathbb{R},\ \left\| \frac{\partial^s f}{\partial x^s} \right\|_\infty < \infty,\ s \le r \right\}, $$
which are used in conjunction with Smolyak's construction. As we will see later, it is of great importance that $n_1^1 = 1$ and $n_l^1 = O(2^l)$. Therefore, in the following we always set
$$ Q_1^1 f = 2 \cdot f(0). $$

3.1 Trapezoidal Rule
The Newton-Cotes formulas [9] use equidistant points and determine the corresponding weights by integration of the Lagrange polynomials through these points. The closed versions include the endpoints of the interval, whereas the open ones omit one or both of them. The formulas become numerically unstable for large numbers of points, i.e. some of the weights become negative. Therefore, iterated versions of low degree formulas are most commonly used [2, 3, 41]. A well known example is the iterated trapezoidal rule. Here we use
$$ n_l^1 = 2^{l-1} + 1, \quad l \ge 2, $$
and therefore have
$$ Q_l^1 f = {\sum_{i=1}^{n_l^1}}'' \; 2^{2-l} \cdot f((i-1) \cdot 2^{2-l} - 1). $$
The $\sum''$ indicates that the first and the last term of the sum are halved. The error bounds are well known and for functions $f \in C^2$ of the form
$$ |E_l^1 f| = O(2^{-2l}). $$
For $[-1,1]$-periodic functions $f \in C^r$, this bound improves to
$$ |E_l^1 f| = O(2^{-lr}). $$
Similar error bounds can be obtained for Simpson's rule and higher degree formulas.
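As an illustration, the iterated trapezoidal rule above can be sketched in a few lines of Python (the function name `trapezoid` is ours; the level-1 rule $Q_1^1 f = 2 f(0)$ is included as in the text):

```python
import math

def trapezoid(f, level):
    """Iterated trapezoidal rule Q_l^1 on [-1, 1]: a single point at level 1,
    n = 2^(l-1) + 1 equidistant points with halved endpoint weights for l >= 2."""
    if level == 1:
        return 2.0 * f(0.0)
    n = 2 ** (level - 1) + 1
    h = 2.0 ** (2 - level)            # mesh width 2 / (n - 1)
    s = 0.5 * (f(-1.0) + f(1.0))      # the sum'' halves the first and last term
    s += sum(f(-1.0 + i * h) for i in range(1, n - 1))
    return h * s
```

For $f \in C^2$ the error decreases by roughly a factor of four per level, matching the $O(2^{-2l})$ bound.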
3.2 Clenshaw-Curtis Formulas
The Clenshaw-Curtis formulas [7] are numerically more stable and use the non-equidistant abscissas given as the zeros or the extreme points of the Chebyshev polynomials. The quadrature formulas are nested in case the extreme points are used. We set for this case
$$ n_l^1 = 2^{l-1} + 1, \quad l \ge 2. $$
The abscissas are given by
$$ x_{li} = -\cos \frac{\pi (i-1)}{n_l^1 - 1} $$
and the weights by
$$ w_{l1} = w_{l n_l^1} = \frac{1}{n_l^1 (n_l^1 - 2)} \quad \text{and} $$
$$ w_{li} = \frac{2}{n_l^1 - 1} \left( 1 + 2 \, {\sum_{j=1}^{(n_l^1-1)/2}}' \; \frac{1}{1 - 4j^2} \cdot \cos \frac{2\pi (i-1) j}{n_l^1 - 1} \right) \quad \text{for } 2 \le i \le n_l^1 - 1. $$
The $\sum'$ indicates that the last term of the sum is halved. The amount of work for the computation of the weights can be reduced to order $O(n_l^1 \log n_l^1)$ using a variant of the FFT algorithm [17]. The polynomial degree of exactness is $n_l^1 - 1$ and the error bounds for $f \in C^r$ are therefore [9]
$$ |E_l^1 f| = O(2^{-lr}). $$
A variant of the Clenshaw-Curtis formulas are the Filippi formulas, in which the abscissas at the boundary of the interval are omitted. The number of points is chosen as in the Gauss-Patterson case, which is described below. The degree of exactness is $n_l^1 - 1$, similar to the Clenshaw-Curtis formulas.
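A direct transcription of the abscissa and weight formulas above can serve as a check (a sketch; the helper name `clenshaw_curtis` is ours):

```python
import math

def clenshaw_curtis(level):
    """Nested Clenshaw-Curtis rule on [-1, 1] with n = 2^(l-1) + 1 points (l >= 2),
    built from the explicit weight formula; the sum' halves its last term."""
    n = 2 ** (level - 1) + 1
    xs = [-math.cos(math.pi * (i - 1) / (n - 1)) for i in range(1, n + 1)]
    ws = [0.0] * n
    ws[0] = ws[-1] = 1.0 / (n * (n - 2))
    for i in range(2, n):                   # interior points
        acc = 1.0
        for j in range(1, (n - 1) // 2 + 1):
            term = math.cos(2.0 * math.pi * (i - 1) * j / (n - 1)) / (1.0 - 4.0 * j * j)
            if j == (n - 1) // 2:
                term *= 0.5                 # last term of the sum is halved
            acc += 2.0 * term
        ws[i - 1] = 2.0 * acc / (n - 1)
    return xs, ws
```

At level 2 this reproduces Simpson's rule (weights 1/3, 4/3, 1/3), and at each level the rule integrates polynomials up to degree $n_l^1 - 1$ exactly.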
3.3 Gauss and Gauss-Patterson Formulas
Gauss formulas have the maximum possible polynomial degree of exactness of $2n - 1$. For the case of the unit weight function the abscissas are the zeros of the Legendre polynomials and the weights are computed by integrating the associated Lagrange polynomials. However, these Gauss-Legendre formulas are in general not nested.

Kronrod [31] extended an n-point Gauss quadrature formula by $n + 1$ points such that the polynomial degree of exactness of the resulting $(2n+1)$-point formula is maximal. This way, quadrature formulas of degree $2n + \tilde{n} + 2$ with
$$ \tilde{n} := \begin{cases} n & \text{if } n \text{ odd}, \\ n - 1 & \text{else}, \end{cases} $$
are obtained. For the Gauss-Legendre formula, the new abscissas are real, symmetric, inside the integration interval and interlace with the original points. Furthermore, all weights are positive. It turned out [33] that the new abscissas are the zeros of the Stieltjes polynomial $F_{n+1}$ satisfying
$$ \int_{-1}^{1} P_n(x) F_{n+1}(x) \, x^j \, dx = 0, \quad \text{for } j = 0, 1, \ldots, n, $$
where $P_n(x)$ is the n-th Legendre polynomial. Therefore, $F_{n+1}$ can be seen as the orthogonal polynomial with respect to the weight function $P_n(x)$, which is of varying sign. The polynomial $F_{n+1}$ can be computed by expanding it in terms of Legendre [43] or Chebyshev [47] polynomials and solving the resulting linear system. The zeros of $F_{n+1}$ can then be calculated by a modified Newton method. Alternatively, the computation of the abscissas can be achieved by the solution of a partial inverse eigenvalue problem [21].

Patterson [43] iterated Kronrod's scheme recursively and obtained a sequence of nested quadrature formulas with maximal degree of exactness. He constructed a sequence of polynomials $G_k(x)$ of degree $2^{k-1}(n+1)$, $k \ge 1$, satisfying
$$ \int_{-1}^{1} P_n(x) \Big( \prod_{i=1}^{k-1} G_i(x) \Big) G_k(x) \, x^j \, dx = 0 \quad \text{for } j = 0, 1, \ldots, 2^{k-1}(n+1) - 1. $$
This way, $G_1(x) = F_{n+1}(x)$ and the $G_k$ are orthogonal to all polynomials of degree less than $2^{k-1}(n+1)$ with respect to the variable signed weight function $P_n(x) \prod_{i=1}^{k-1} G_i(x)$. The $2^k(n+1) - 1$ abscissas of the resulting quadrature formulas are the zeros of $P_n$ and of all $G_j$, $1 \le j \le k$. The abscissas and weights can be computed similarly to the Kronrod case. This way, formulas of degree $(3 \cdot 2^{k-1} - 1)(n+1) + \tilde{n}$ can be obtained, at least in theory.

However, Patterson extensions do not exist for all Gauss-Legendre formulas. For example, in the case of the 2-point Gauss-Legendre formula, only four extensions are possible [45]. But, starting with the 3-point formula, extensions exist for practicable k and all properties of Kronrod's scheme are preserved.

We set $Q_2^1$ equal to the 3-point Gauss-Legendre formula, and $Q_l^1$, $l \ge 3$, equal to its $(l-2)$-nd Patterson extension. This way, $n_l^1 = 2^l - 1$ and the polynomial degree of exactness is $3 \cdot 2^{l-1} - 1$ for $l \ge 2$. The error for $f \in C^r$ is therefore again
$$ |E_l^1 f| = O(2^{-lr}). $$
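The 3-point Gauss-Legendre formula used as $Q_2^1$ is available in closed form, and its degree of exactness $3 \cdot 2^{2-1} - 1 = 5$ can be verified directly (a sketch; the function names are ours):

```python
import math

def gauss_legendre_3():
    """3-point Gauss-Legendre rule on [-1, 1]: zeros of P_3, degree of exactness 5."""
    x = math.sqrt(3.0 / 5.0)
    return [-x, 0.0, x], [5.0 / 9.0, 8.0 / 9.0, 5.0 / 9.0]

def apply_rule(xs, ws, f):
    """Apply a univariate quadrature rule given by abscissas xs and weights ws."""
    return sum(w * f(x) for x, w in zip(xs, ws))
```

The rule integrates all monomials up to $x^5$ exactly but fails on $x^6$, confirming degree 5.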
3.4 Comparison
Of the considered nested quadrature formulas (with the restriction to periodic functions in the case of the trapezoidal rule), all achieve the optimal order of accuracy $O(2^{-lr})$. Among these, the Gauss-Patterson formulas achieve the highest possible polynomial exactness of nearly $(3/2) n_l^1$, compared to $n_l^1 - 1$ for the Clenshaw-Curtis and Filippi formulas and 1 for the trapezoidal rule. From the results in [4] it also follows that the Peano constants are smaller in comparison to the other formulas considered.

However, the existence of Patterson extensions is at present not known for large k, i.e. k > 5. Still, for Smolyak's construction the existing Patterson formulas are sufficient for moderate and high dimensional problems.

Note that although the order of $n_l^1$ is the same in all cases, the actual number of points in the trapezoidal and Clenshaw-Curtis formulas compared to the Filippi, Gauss-Legendre and Patterson formulas can differ by almost a factor of 2 for the same level l.

4 Smolyak's Construction
Let us now consider the multivariate case. Smolyak [51] proposed a construction of multivariate quadrature formulas on the basis of one-dimensional quadrature formulas in a tensor product setting. The function classes he considered include functions with bounded mixed derivatives of order r, that is,
$$ W_d^r := \left\{ f : \Omega \to \mathbb{R},\ \left\| \frac{\partial^{|\mathbf{s}|_1} f}{\partial x_1^{s_1} \cdots \partial x_d^{s_d}} \right\|_\infty < \infty,\ s_i \le r \right\}, $$
with $|\mathbf{s}|_1 := s_1 + \ldots + s_d$. These spaces correspond to partially separable function spaces [38] and, in the case $f$ is $[-1,1]^d$-periodic, to Korobov spaces [53]
$$ E_d^r := \left\{ f : \Omega \to \mathbb{R},\ a(m_1, \ldots, m_d) = O(|\bar{m}_1 \cdots \bar{m}_d|^{-r}) \right\}, $$
with $r > 1$, $\bar{m}_j := \max\{1, |m_j|\}$ and $a(m_1, \ldots, m_d)$ being the Fourier coefficients of the series
$$ f(\mathbf{x}) = \sum_{m_1, \ldots, m_d = -\infty}^{\infty} a(m_1, \ldots, m_d) \cdot e^{-2\pi i (m_1 x_1 + \ldots + m_d x_d)}. $$

4.1 Algorithm
First, consider a sequence of (not necessarily nested) one-dimensional quadrature formulas for a univariate function f,
$$ Q_l^1 f := \sum_{i=1}^{n_l^1} w_{li} \cdot f(x_{li}). $$
Now, define the difference quadrature formulas by
$$ \Delta_k^1 f := (Q_k^1 - Q_{k-1}^1) f \quad \text{with} \quad Q_0^1 f := 0. $$
In general, the difference formulas are therefore quadrature formulas on the union of the grids $\Gamma_k^1 \cup \Gamma_{k-1}^1$ (which is $\Gamma_k^1$ in the nested case). Smolyak's construction for d-dimensional functions f is then, for $l \in \mathbb{N}$ and $\mathbf{k} \in \mathbb{N}^d$,
$$ Q_l^d f := \sum_{|\mathbf{k}|_1 \le l+d-1} (\Delta_{k_1}^1 \otimes \ldots \otimes \Delta_{k_d}^1) f. $$
The tensor product of d quadrature formulas $(Q_{l_1}^1 \otimes \ldots \otimes Q_{l_d}^1)$ is hereby defined as the sum over all possible combinations
$$ (Q_{l_1}^1 \otimes \ldots \otimes Q_{l_d}^1) f := \sum_{i_1=1}^{n_{l_1}^1} \ldots \sum_{i_d=1}^{n_{l_d}^1} w_{l_1 i_1} \cdots w_{l_d i_d} \cdot f(x_{l_1 i_1}, \ldots, x_{l_d i_d}). $$
Note that a simple product formula is characterized by
$$ (Q_l^1 \otimes \ldots \otimes Q_l^1) f = \sum_{1 \le k_j \le l,\; j=1,\ldots,d} (\Delta_{k_1}^1 \otimes \ldots \otimes \Delta_{k_d}^1) f, $$
and corresponds to a summation over the cube $|\mathbf{k}|_\infty \le l$ with $|\mathbf{k}|_\infty := \max\{k_j\}$ instead of the simplex $|\mathbf{k}|_1 \le l + d - 1$.

Alternatively, Smolyak's formula can be written in terms of the $Q_{k_j}^1$ instead of the $\Delta_{k_j}^1$ [10],
$$ Q_l^d f = \sum_{l \le |\mathbf{k}|_1 \le l+d-1} (-1)^{l+d-|\mathbf{k}|_1-1} \cdot \binom{d-1}{|\mathbf{k}|_1 - l} \cdot (Q_{k_1}^1 \otimes \ldots \otimes Q_{k_d}^1) f, $$
which is also known as the combination technique [25]. Of further importance are the dimension recursive versions [41, 57]
$$ Q_l^d f = \sum_{k=1}^{l} (\Delta_k^1 \otimes Q_{l-k+1}^{d-1}) f $$
and
$$ Q_l^{d+1} f = \sum_{|\mathbf{k}|_1 \le l+d-1} (\Delta_{k_1}^1 \otimes \ldots \otimes \Delta_{k_d}^1 \otimes Q_{l+d-|\mathbf{k}|_1}^1) f. $$
These two formulations are frequently used to prove the properties of Smolyak's construction presented in the following sections.
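To make the combination technique concrete, here is a minimal Python sketch over the simplex of indices $l \le |\mathbf{k}|_1 \le l+d-1$, using the iterated trapezoidal rule of Section 3.1 as the univariate basis (the names `trap_rule` and `smolyak` are ours; this is an illustration, not the paper's implementation):

```python
import itertools
import math

def trap_rule(level):
    """Nested iterated trapezoidal rule on [-1, 1]: one point at level 1,
    n = 2^(l-1) + 1 points with halved endpoint weights for l >= 2."""
    if level == 1:
        return [0.0], [2.0]
    n = 2 ** (level - 1) + 1
    h = 2.0 ** (2 - level)
    xs = [(i - 1) * h - 1.0 for i in range(1, n + 1)]
    ws = [h] * n
    ws[0] *= 0.5
    ws[-1] *= 0.5
    return xs, ws

def smolyak(f, d, l):
    """Smolyak quadrature via the combination technique: signed, binomially
    weighted product rules over the simplex l <= |k|_1 <= l + d - 1."""
    total = 0.0
    for k in itertools.product(range(1, l + d), repeat=d):
        s = sum(k)
        if not (l <= s <= l + d - 1):
            continue
        coeff = (-1) ** (l + d - s - 1) * math.comb(d - 1, s - l)
        rules = [list(zip(*trap_rule(kj))) for kj in k]
        for point in itertools.product(*rules):
            x = [p[0] for p in point]
            w = math.prod(p[1] for p in point)
            total += coeff * w * f(x)
    return total
```

For a separable smooth integrand such as $e^{x_1 + x_2}$ on $[-1,1]^2$ the sketch converges quickly toward the exact value $(e - e^{-1})^2$, illustrating the almost dimension-independent behaviour of the construction.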
4.2 Sparse grids
In a straightforward implementation, Smolyak's construction as well as the combination technique require multiple function evaluations at some abscissas. If function evaluations are costly, e.g. themselves integrals (see for example Section 6.5), it is necessary to modify the algorithm. For the case that the one-dimensional quadrature formulas $Q_l^1 f$ are nested, we consider the one-dimensional difference grids
$$ \Delta\Gamma_l^1 := \Gamma_l^1 \setminus \Gamma_{l-1}^1, $$
with $\Gamma_l^1$ being the grids of the one-dimensional formulas and $\Gamma_0^1 := \emptyset$. We denote the elements of $\Delta\Gamma_l^1$ by $y_{li}$, $1 \le i \le m_l$. In the non-nested case, we set
$$ \Delta\Gamma_l^1 := \Gamma_l^1, $$
$y_{li} := x_{li}$ and $m_l = n_l$. The points of the multivariate Smolyak formula then form a so-called sparse grid [58] given by the union over the pairwise disjoint grids $\Delta\Gamma_{k_1}^1 \times \ldots \times \Delta\Gamma_{k_d}^1$,
$$ \Gamma_l^d = \biguplus_{|\mathbf{k}|_1 \le l+d-1} \Delta\Gamma_{k_1}^1 \times \ldots \times \Delta\Gamma_{k_d}^1. $$

Figure 1: Sparse grids corresponding to the trapezoidal, Clenshaw-Curtis, Gauss-Patterson, and Gauss-Legendre rules for d = 2, l = 6.

Examples of sparse grids are shown in Figure 1. We have also plotted a sparse grid corresponding to the non-nested Gauss-Legendre formula, in which the number of abscissas of the one-dimensional formulas is the same as in the Gauss-Patterson case. Note again that the number of abscissas of the Gauss and Gauss-Patterson formulas is approximately twice the number of abscissas of the trapezoidal and Clenshaw-Curtis formulas.

As an important consequence, the multivariate formulas are nested if the one-dimensional formulas are [37], that is,
$$ \Gamma_l^d \subset \Gamma_{l+1}^d. $$
The number of points in a sparse grid can be determined as
$$ n_l^d = \sum_{|\mathbf{k}|_1 \le l+d-1} m_{k_1} \cdots m_{k_d}. $$
If $n_l^1 = O(2^l)$, the order of $n_l^d$ is (see [39])
$$ n_l^d = O(2^l \cdot l^{d-1}). $$
This property is in contrast to product rules, in which the number of grid points is of order $O(2^{ld})$. In order to reduce the increase of the number of grid points in high dimensional problems it is especially important to have $n_1^1 = 1$. Note that the order of $n_l^d$ is the same in the nested as well as in the non-nested case, but the constants are considerably larger in the non-nested case if the same number of abscissas for the univariate quadrature formulas is used.
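The point count $n_l^d = \sum_{|\mathbf{k}|_1 \le l+d-1} m_{k_1} \cdots m_{k_d}$ can be evaluated directly. The sketch below (names ours) uses the increments $m_1 = 1$, $m_2 = 2$, $m_k = 2^{k-2}$ of the nested trapezoidal sequence $n_l^1 = 2^{l-1} + 1$:

```python
import itertools

def m(k):
    """Points added at level k by the nested trapezoidal sequence
    n_1 = 1, n_l = 2^(l-1) + 1 for l >= 2."""
    if k == 1:
        return 1
    if k == 2:
        return 2
    return 2 ** (k - 2)

def sparse_grid_size(d, l):
    """n_l^d = sum over |k|_1 <= l + d - 1 of m(k_1) * ... * m(k_d)."""
    total = 0
    for k in itertools.product(range(1, l + 1), repeat=d):
        if sum(k) <= l + d - 1:
            p = 1
            for kj in k:
                p *= m(kj)
            total += p
    return total
```

For d = 2, l = 6 this gives 145 sparse grid points, compared to 33^2 = 1089 points of the corresponding product grid, illustrating the $O(2^l \cdot l^{d-1})$ versus $O(2^{ld})$ growth.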
4.3 Computation of the weights
Smolyak's algorithm can now be written as
$$ Q_l^d f := \sum_{|\mathbf{k}|_1 \le l+d-1} \; \sum_{j_1=1}^{m_{k_1}} \ldots \sum_{j_d=1}^{m_{k_d}} w_{\mathbf{k}\mathbf{j}} \cdot f(\mathbf{x}_{\mathbf{k}\mathbf{j}}), $$
with $\mathbf{x}_{\mathbf{k}\mathbf{j}} := (x_{k_1 j_1}, \ldots, x_{k_d j_d})$. In the nested case the weights are given by
$$ w_{\mathbf{k}\mathbf{j}} = \sum_{|\mathbf{k}+\mathbf{q}|_1 \le l+2d-1} v_{(k_1+q_1) j_1} \cdots v_{(k_d+q_d) j_d}, $$
with $\mathbf{q} \in \mathbb{N}^d$ and
$$ v_{(k+q)j} := \begin{cases} w_{kj} & \text{if } q = 1, \\ w_{(k+q-1)r} - w_{(k+q-2)s} & \text{if } q > 1,\ x_{kj} = x_{(k+q-1)r} = x_{(k+q-2)s}. \end{cases} $$
Explicit formulas for the weights $w_{\mathbf{k}\mathbf{j}}$ exist only in special cases, such as the rectangle rule [41]. For the non-nested case, we have directly
$$ w_{\mathbf{k}\mathbf{j}} = w_{k_1 j_1} \cdots w_{k_d j_d}. $$
The weights can be precomputed in both cases, so there is no practical difference concerning the overall cost of the quadrature formulas.

Smolyak quadrature formulas can contain negative weights even if the underlying univariate quadrature formulas are positive. Convergence is guaranteed, however, because the absolute values of the weights remain relatively small. It can be shown [38] that in the nested as well as in the non-nested case
$$ \sum_{|\mathbf{k}|_1 \le l+d-1} \; \sum_{j_1=1}^{m_{k_1}} \ldots \sum_{j_d=1}^{m_{k_d}} |w_{\mathbf{k}\mathbf{j}}| = O((\log(n_l^d))^{d-1}). $$
Nevertheless, due to the existence of negative weights, it is especially important to avoid numerical cancellation. Instead of a summation with increasing l, that is,
$$ Q_l^d f := \sum_{m=1}^{l} \; \sum_{|\mathbf{k}|_1 = m+d-1} \; \sum_{j_1=1}^{m_{k_1}} \ldots \sum_{j_d=1}^{m_{k_d}} w_{\mathbf{k}\mathbf{j}} \cdot f(\mathbf{x}_{\mathbf{k}\mathbf{j}}), $$
we recommend to sum up coordinate-wise, i.e.
$$ Q_l^d f = \sum_{k_1=1}^{l} \; \sum_{k_2=1}^{l+1-k_1} \ldots \sum_{k_d=1}^{l+d-1-k_1-\ldots-k_{d-1}} \; \sum_{j_1=1}^{m_{k_1}} \ldots \sum_{j_d=1}^{m_{k_d}} w_{\mathbf{k}\mathbf{j}} \cdot f(\mathbf{x}_{\mathbf{k}\mathbf{j}}). $$
Numerical experiments show that rounding errors are much smaller in this case. Similarly, in the nested case the weights $w_{\mathbf{k}\mathbf{j}}$ should be computed with the same strategy.
4.4 Error Bounds
We first look at the polynomial degree of exactness of a Smolyak quadrature formula. Let $P_l^1$ be the space of one-dimensional polynomials of degree $\le l$. In the multivariate case we consider the polynomial spaces
$$ P_l^d := \{ P_{k_1}^1 \otimes \ldots \otimes P_{k_d}^1,\ |\mathbf{k}|_1 = l+d-1 \}. $$
If $Q_l^1$ is exact for $P_l^1$, then $Q_l^d$ is exact for $P_l^d$ [37], that is,
$$ E_l^d f = 0 \quad \text{for all } f \in P_l^d. $$
The classical polynomial exactness (based on the spaces $P_{k_1}^1 \otimes \ldots \otimes P_{k_d}^1$, $|\mathbf{k}|_1 = l$), however, behaves for interpolatory quadrature formulas in Smolyak's construction for l < d only linearly, like $2l - 1$ [8, 39].

A quadrature formula $Q_l^1$ is called symmetric if for $x \in \Gamma_l^1$ also $-x \in \Gamma_l^1$ and the weights for these abscissas are the same. If all $Q_l^1$ are symmetric, then $Q_l^d$ is exact for all $P_{k_1}^1 \otimes \ldots \otimes P_{k_d}^1$ with at least one $k_j$ odd [37].

In order to formulate error bounds for Smolyak's formula, we start with the error bound for one-dimensional quadrature formulas for functions $f \in C^r$,
$$ |E_l^1 f| = O((n_l^1)^{-r}). $$
This bound holds for example for all interpolatory quadrature formulas with positive weights, such as the Clenshaw-Curtis, Gauss-Patterson and Gauss-Legendre formulas. Taking one such quadrature formula as the one-dimensional basis, if $f \in W_d^r$ and $n_l^1 = O(2^l)$, the error of Smolyak's quadrature formula is of order [57]
$$ |E_l^d f| = O(2^{-lr} \cdot l^{(d-1)(r+1)}). $$
Similar results exist for Korobov spaces $E_d^r$ [2]. For the classical spaces $C_d^r$ error bounds can also be derived [38], but they indicate an exponential dependence on the dimension.

5 Extensions
In this section, we consider speed-up techniques and strategies for computing singular integrands.

5.1 Extrapolation
Extrapolation methods use a linear combination of nested quadrature formulas to construct a quadrature formula which has lower order error terms eliminated. We will consider here only the extrapolation of the iterated trapezoidal rule.

In the univariate case, the coefficients $c_{lj}$ are determined such that the extrapolated quadrature formula $\Psi_l^1$,
$$ \Psi_l^1 f = \sum_{j=1}^{l} c_{lj} \cdot Q_j^1 f, $$
is of higher order. In the case of the iterated trapezoidal rule, the Euler-MacLaurin summation formula yields the coefficients
$$ c_{lj} = \prod_{\substack{i=1 \\ i \ne j}}^{l} \frac{4^j}{4^j - 4^i}. $$
Since the iterated trapezoidal rule is of order $O(2^{-2l})$, the extrapolated version is of order $O(2^{-2l^2})$ for $f \in C^{2l}$. Therefore, by replacing r by 2l, analogous error bounds as for the Gauss and Clenshaw-Curtis formulas can be obtained. In the multivariate case, it is possible to apply the combination technique to get the extrapolated quadrature formula [3]
$$ Q_l^d f := \sum_{l \le |\mathbf{k}|_1 \le l+d-1} (-1)^{l+d-|\mathbf{k}|_1-1} \cdot \binom{d-1}{|\mathbf{k}|_1 - l} \cdot (\Psi_{k_1}^1 \otimes \ldots \otimes \Psi_{k_d}^1) f. $$
The error bounds for the extrapolated quadrature formula can be obtained from Section 4.4.
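In the univariate case, the extrapolation coefficients and the resulting rule can be sketched as follows (assuming the trapezoidal sequence of Section 3.1; the names are ours):

```python
import math

def romberg_coeffs(l):
    """Extrapolation coefficients c_lj = prod_{i != j} 4^j / (4^j - 4^i),
    which cancel the h^2, h^4, ... error terms of the trapezoidal rule."""
    cs = []
    for j in range(1, l + 1):
        c = 1.0
        for i in range(1, l + 1):
            if i != j:
                c *= 4.0 ** j / (4.0 ** j - 4.0 ** i)
        cs.append(c)
    return cs

def trap(f, level):
    """Iterated trapezoidal rule on [-1, 1] with one point at level 1
    and n = 2^(l-1) + 1 points for l >= 2."""
    if level == 1:
        return 2.0 * f(0.0)
    n = 2 ** (level - 1) + 1
    h = 2.0 ** (2 - level)
    s = 0.5 * (f(-1.0) + f(1.0))
    s += sum(f(-1.0 + i * h) for i in range(1, n - 1))
    return h * s

def extrapolated(f, l):
    """Extrapolated rule: sum over j of c_lj * Q_j^1 f."""
    return sum(c * trap(f, j + 1) for j, c in enumerate(romberg_coeffs(l)))
```

Since the extrapolated rule must reproduce constants, the coefficients sum to one; on a smooth integrand the extrapolated value is far more accurate than the plain trapezoidal rule at the same level.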
5.2 Adaptivity
Adaptive quadrature formulas spend more points in areas where f is not smooth and fewer points where f is smooth. In principle, it is possible to consider two kinds of adaptivity strategies. The first approach uses a-priori knowledge of the smoothness of the integrand, while in the second approach the algorithm adapts itself automatically during the quadrature process. Examples of such strategies for Smolyak's construction have been presented in [3].

If it is known that the smoothness of the integrand depends on the direction, it is possible to apply different univariate quadrature formulas in each direction. This way, less costly quadrature formulas (even of different type) can be applied in smoother directions and therefore the overall cost is reduced. This is especially important when dealing with high-dimensional integrals. It is also possible to use grid-adapted univariate quadrature formulas, for example if the integrand has singular behaviour in certain directions.

Alternatively, it is possible to modify Smolyak's original construction and define non-isotropic formulas [41] of the form
$$ Q_l^d f := \sum_{|\mathbf{k}|_v \le l+d-1} (\Delta_{k_1}^1 \otimes \ldots \otimes \Delta_{k_d}^1) f, $$
with
$$ |\mathbf{k}|_v := \sum_{j=1}^{d} v_j \cdot k_j, \quad v_j > 0. $$
This corresponds to a weighting of the directions, and therefore small values of $v_j$ can be used in smoother directions.

This construction is in fact a special case of a wider class of quadrature formulas which can be generated by the following generalization of Smolyak's construction. We denote the j-th unit vector by $\mathbf{e}_j$ and define index sets $I_l$ such that for all $\mathbf{k} \in I_l$,
$$ \mathbf{k} - \mathbf{e}_j \in I_l \quad \text{for } 1 \le j \le d,\ k_j > 1, $$
holds. This way, a general Smolyak quadrature formula is defined by
$$ Q_l^d f := \sum_{\mathbf{k} \in I_l} (\Delta_{k_1}^1 \otimes \ldots \otimes \Delta_{k_d}^1) f. $$
To describe the general combination technique formally, we define the characteristic function $\chi_{I_l}$ of $I_l$ as
$$ \chi_{I_l}(\mathbf{k}) = \begin{cases} 1 & \text{if } \mathbf{k} \in I_l, \\ 0 & \text{else}, \end{cases} $$
and have
$$ Q_l^d f = \sum_{\mathbf{k} \in I_l} \left( \sum_{z_1=0}^{1} \ldots \sum_{z_d=0}^{1} (-1)^{|\mathbf{z}|_1} \cdot \chi_{I_l}(\mathbf{k}+\mathbf{z}) \right) (Q_{k_1}^1 \otimes \ldots \otimes Q_{k_d}^1) f. $$
An appropriate choice of $I_l$ can reduce the computing cost significantly for special integrands, especially in high-dimensional problems, as can be seen in Section 6.6. The index sets $I_l$ can be obtained a-priori or by a self-adaptive procedure.
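The generalized combination coefficients $\sum_{\mathbf{z} \in \{0,1\}^d} (-1)^{|\mathbf{z}|_1} \chi_{I_l}(\mathbf{k}+\mathbf{z})$ can be computed for any admissible index set; a minimal sketch (function name ours):

```python
import itertools
import math

def combination_coefficient(k, in_index_set):
    """c_k = sum over z in {0,1}^d of (-1)^{|z|_1} * chi(k + z), where
    in_index_set is the characteristic function of the index set I_l."""
    d = len(k)
    c = 0
    for z in itertools.product((0, 1), repeat=d):
        if in_index_set(tuple(ki + zi for ki, zi in zip(k, z))):
            c += (-1) ** sum(z)
    return c
```

For the simplex index set $I_l = \{\mathbf{k} : |\mathbf{k}|_1 \le l+d-1\}$ these coefficients reduce to the signed binomial factors $(-1)^{l+d-|\mathbf{k}|_1-1}\binom{d-1}{|\mathbf{k}|_1-l}$ of the classical combination technique, and vanish for $|\mathbf{k}|_1 < l$.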
Adaptive schemes during the quadrature process are based on an error estimator which refines regions according to a partitioning scheme [18, 55]. Nested quadrature formulas allow a simple global error estimator based on the difference of two subsequent quadrature formulas, i.e. for $l \ge 2$,
$$ |E_l^d f| \approx |(Q_l^d - Q_{l-1}^d) f|. $$
Refinement strategies can then be obtained by locally applying error estimators of the above form. Note, however, that self-adaptive strategies for high-dimensional integrals are very costly and involved.
5.3 General Weight Functions
We consider the weighted integral
$$ I_d f := \int_\Omega w(\mathbf{x}) f(\mathbf{x}) \, d\mathbf{x}, $$
where
$$ w(\mathbf{x}) := w_1(x_1) \cdots w_d(x_d). $$
In this case, Smolyak's construction can be applied directly, using appropriate one-dimensional quadrature formulas corresponding to the weight functions $w_1, \ldots, w_d$.

For general weight functions, the theory of nested univariate quadrature formulas is far less developed. Patterson [44] constructs extended Gaussian quadrature formulas for general constant signed weight functions. The computation of the abscissas is done on the basis of the associated 3-term recurrence relation for the orthogonal polynomials corresponding to the weight function. However, the existence of real abscissas and the interlacing property is not always guaranteed, and is heavily dependent on the weight function. For Hermite and Laguerre weight functions, even the first (Kronrod) extension is known only for a few special cases. On the other hand, extensions for Gauss-Chebyshev formulas (of the first and second kind) seem to exist for all n and even have the same or nearly the same degree of exactness as the corresponding Gauss formulas.

One way to derive extensions in the cases where the existence of Patterson extensions is not guaranteed are suboptimal extensions [45, 49]. These formulas aim at a smaller degree of exactness than the highest possible, but try to construct quadrature formulas with real and interlacing abscissas.
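As an example of a weighted rule that is available in closed form, the n-point Gauss-Chebyshev formula of the first kind (weight $w(x) = 1/\sqrt{1-x^2}$) has abscissas $x_i = \cos((2i-1)\pi/(2n))$ and constant weights $\pi/n$; a sketch (function names ours):

```python
import math

def gauss_chebyshev(n):
    """n-point Gauss-Chebyshev rule (first kind) for the weight
    w(x) = 1/sqrt(1 - x^2) on [-1, 1]: x_i = cos((2i-1)pi/(2n)), w_i = pi/n."""
    xs = [math.cos((2 * i - 1) * math.pi / (2 * n)) for i in range(1, n + 1)]
    ws = [math.pi / n] * n
    return xs, ws

def integrate(f, n):
    """Approximate the weighted integral of f against 1/sqrt(1 - x^2)."""
    xs, ws = gauss_chebyshev(n)
    return sum(w * f(x) for x, w in zip(xs, ws))
```

For instance, $\int_{-1}^{1} x^2 / \sqrt{1-x^2} \, dx = \pi/2$ is reproduced exactly already for n = 3, since the rule has degree of exactness $2n - 1$.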

6 Numerical Examples
In the following, we consider several multivariate integration problems arising in various applications which can be solved using the sparse grid method. The dimension of these integrals varies from low (3-6) up to theoretically infinite. In order to show the performance of the methods presented, we chose in all cases except for the CMO problem examples where the exact value of the integral is known. Several examples are taken from Morokoff and Caflisch [34], which allows the comparison with Quasi-Monte Carlo methods.

6.1 Test Functions
First, we consider some test integrals which have a rather simple structure. Although these functions might not be of great use in real applications, they still allow some insight into the properties of sparse grid quadrature formulas. Previous comparisons with other multivariate quadrature formulas [3, 37, 38, 39] showed the superiority of sparse grid quadrature formulas for smooth functions. So here we compare only the performance of sparse grid formulas with different univariate bases. Let us consider the following test integral taken from [34], which is an example for a function whose variation grows exponentially,
$$ I_d f = \int_{[0,1]^d} (1 + 1/d)^d \prod_{i=1}^{d} (x_i)^{1/d} \, d\mathbf{x}. $$

         Trapez             Clenshaw           Patterson          Gauss
 l   Fcalls  Error      Fcalls  Error      Fcalls  Error      Fcalls  Error
 1        1  2.44e-01        1  2.44e-01        1  2.44e-01        1  2.44e-01
 2       11  1.08e-00       11  6.38e-01       11  8.94e-03       11  8.94e-03
 3       61  7.58e-02       61  1.44e-01       71  8.07e-04       81  8.38e-04
 4      241  2.86e-01      231  1.24e-01      351  2.07e-04      471  8.74e-05
 5      801  1.08e-01      801  6.65e-03     1471  2.26e-05     2341  7.57e-06
 6     2433  8.00e-02     2433  1.06e-02     5503  1.42e-06    10363  9.38e-08
 7     6993  5.03e-02     6993  1.74e-03    18943  3.44e-09    41913  1.94e-07
 alpha       0.22               0.61               1.63               1.54

Table 1: Computational results for the test integral

The exact value of this integral is 1. In Table 1 we compare the numerical results for Smolyak quadrature formulas with the trapezoidal rule (trapez), the Clenshaw-Curtis formulas (clenshaw), the Gauss-Patterson formulas (patterson), and the Gauss-Legendre formulas (gauss) as univariate basis integration routines for dimension d = 5. At the bottom of this and the following tables we denote the computed exponential factor $\alpha$ for the error
$$ E_l^d f = c \cdot N^{-\alpha}, $$
obtained by a least squares fit of the data.

We see that the Patterson formula performs best considering the ratio of error to function calls. The fitted convergence rate for a Quasi-Monte Carlo method using a Sobol sequence is for this example only $\alpha \approx 0.5$ [34]. The Clenshaw-Curtis and trapezoidal rules perform particularly badly, probably because they require function evaluations in the origin, where the singularity is located.

Further tests of Smolyak's construction based on various univariate quadrature formulas using the Genz testing package have been made in [3, 8, 37]. In summary, the results show that the Filippi formulas perform in general worse than the Patterson formulas, and the trapezoidal and extrapolated trapezoidal rules usually perform worst. The superiority of Patterson formulas over Clenshaw-Curtis formulas in Smolyak's construction decreases with rising dimension d. This can be attributed to the fact that the Clenshaw-Curtis formula requires fewer function evaluations for the same classical polynomial exactness for l < d.
6.2 Integral Equation
The discretization of multivariate partial di erential or integral equations us-
ing nite element or boundary element methods often requires the evaluation
of multivariate integrals. In case the exact value of these integrals is too ex-
pensive to compute, numerical algorithms are commonly used. We consider the
Dirichlet screen (square plate) problem
u = 0 in IR3 n 
u = f on
with = [0; 1] and u = O( jzj ) for jzj ! 1. This problem can be transformed
2 1
into an integral equation using the single layer potential V , i.e.
Z
V (x) := 41 jx ?1 yj (y) dsy = f (x); x 2 :
The boundary element Galerkin approach using a nite{dimensional approxi-
mating subspace Xh of the appropriate Sobolev space leads to the linear system
(V h ; h ) = (fh ; h )
with h ; h 2 Xh . We consider for Xh the space of piecewise bilinear functions
on an equidistant grid h with h = 2?l and use a nodal (Lagrange) basis fh g.
The entries of the n2  n2 ; n = 2l + 1 matrix A = (ab;c) := (V h ; h ); b; c =
(0; 0); : : :; (n; n) are then given by
Z (b1+1)h Z (b2+1)h Z (c1+1)h Z (c2+1)h b(x)c (y)
b;ca = h h dy dx;
(b1 ?1)h (b2 ?1)h (c1 ?1)h (c2 ?1)h jx ? yj
14
where (
bh (x) = 'h (x ? hb) for x ? hb 2
0 else
and
'h (x) = maxf(1 ? jx1 j=h)  (1 ? jx2j=h); 0g:
The exact value of these integrals can be computed using the computationally
rather expensive recursion formulas presented in [29]. For the numerical com-
putation each integral is subdivided along x = b and y = c into the 16 parts
where the integrand is di erentiable. Away from the singularity it is possible
to apply Smolyak's construction directly. For the computation of the singular
integrals the above mentioned recursion formulas or the Du y transformation
[12] can be used.
In Table 2 we compare the performance of several Smolyak quadrature for-
mulas using di erent one{dimensional basis integration routines for one selected
matrix entry. We show the computational results for h = 321 , a = (0; 0)T and
b = (0; 3)T since this is one of the least smooth non{singular integrals.
            Trapez            Clenshaw           Patterson            Gauss
  l    Fcalls  Error      Fcalls  Error      Fcalls  Error      Fcalls  Error
  1         1  7.28e-03        1  7.28e-03        1  7.28e-03        1  7.28e-03
  2         9  4.71e-03        9  7.15e-04        9  2.00e-04        9  2.00e-04
  3        41  1.51e-03       41  2.41e-06       49  4.63e-05       57  4.63e-05
  4       137  4.80e-04      137  1.67e-05      209  3.85e-06      289  3.85e-06
  5       401  5.43e-05      401  4.42e-06      769  2.90e-07     1265  2.90e-07
  6      1105  2.30e-05     1105  5.57e-07     2561  8.52e-09     4969  8.52e-09
  7      2929  6.00e-06     2929  5.18e-07     7973  8.55e-11    17945  8.55e-11
 rate          0.94               1.21               1.97               1.80

Table 2: Computational results for the integral equation
We see that the errors are the same for the Gauss–Legendre and Gauss–Patterson
formulas. In fact, they are not identical but differ in the fourth and
following digits. This can be attributed to the fact that polynomials in the
range for which the Gauss–Legendre formula is exact and the Gauss–Patterson
formula is not are integrated by the Gauss–Patterson formula with about this
accuracy. However, the Gauss–Patterson formulas require far fewer function
evaluations and therefore have the best rate of convergence. The trapezoidal
and Clenshaw–Curtis rules require still fewer function evaluations but also
exhibit a much larger error.
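The rates in the last row of the tables are least-squares fits of the model error ≈ C·N^(−α) in log-log coordinates. A sketch of such a fit, applied here to the Patterson column of Table 2 (the exact fitted value depends on which levels enter the fit, so it may differ slightly from the tabulated number):

```python
import numpy as np

def fitted_rate(fcalls, errors):
    """Least-squares fit of error ~ C * N**(-alpha) in log-log
    coordinates; returns the fitted exponent alpha."""
    slope, _ = np.polyfit(np.log(fcalls), np.log(errors), 1)
    return -slope

# Patterson column of Table 2
fc  = [1, 9, 49, 209, 769, 2561, 7973]
err = [7.28e-03, 2.00e-04, 4.63e-05, 3.85e-06, 2.90e-07, 8.52e-09, 8.55e-11]
print(fitted_rate(fc, err))   # a rate close to 2
```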
Once all matrix entries have been computed the linear system can be solved
using a preconditioned conjugate gradient iteration. This yields the solution
shown in Figure 2.
Further integral equations arise in computer graphics; examples are the
radiosity and volume rendering equations. There are also higher-dimensional
generalizations, for example volume integral equations, which require the
evaluation of six-dimensional integrals.

Figure 2: Numerically computed screen (h = 1/32)
6.3 Boltzmann Collision Integral


The gain in the distribution function for the density of a rarefied gas can be
described by an integral over the velocity w and the two collision angles θ
and ε. We consider here a special case where the exact value of the integral
is known [34]. We consider the five-dimensional integral

    I_df = (1/π^{3/2}) ∫_{ℝ³} ∫₀^{2π} ∫₀^{π} |w|² sin θ · e^{h(u,w,θ,ε)} · e^{−|w|²} dθ dε dw,

where w₂₃ = √(w₂² + w₃²) and

    h(u, w, θ, ε) = −2u² + u(w₁ + w₂ + (w₂ − w₁) cos θ)
                    + u sin θ (w₂₃ sin ε + (|w|² w₃ cos ε + w₁ w₂ sin ε) / w₂₃).
The exact value of this integral is given by the series

    I_df = (π/4) · e^{−2u²} · Σ_{k=0}^{∞} c_k u^{2k},

with

    c_k = 3^{k+1} / (1 · 3 · … · (2k+1)).

The integral over ℝ³ can be transformed into an integral over the unit cube
using the inverse distribution function, thus eliminating the e^{−|w|²} term.
The computational results for u = 0.25 are shown in Table 3.
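The transformation via the inverse distribution function can be sketched in one dimension: substituting the inverse CDF of the Gaussian turns the weighted integral over ℝ into an unweighted integral over (0, 1). A minimal illustration, using a plain midpoint rule as a stand-in for the sparse grid rule:

```python
import math
from statistics import NormalDist

# G maps a uniformly distributed x in (0, 1) to a variable with density
# proportional to exp(-w^2), i.e. a N(0, 1/sqrt(2)) variable; then
#   int_R f(w) exp(-w^2) dw = sqrt(pi) * int_0^1 f(G(x)) dx.
G = NormalDist(mu=0.0, sigma=1.0 / math.sqrt(2.0)).inv_cdf

def transformed_integral(f, n=20000):
    """Unweighted integral over (0, 1) by the midpoint rule."""
    h = 1.0 / n
    return math.sqrt(math.pi) * h * sum(f(G((i + 0.5) * h)) for i in range(n))

# check against the known value int_R w^2 exp(-w^2) dw = sqrt(pi)/2
approx = transformed_integral(lambda w: w * w)
print(approx, math.sqrt(math.pi) / 2.0)
```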
           Patterson            Gauss
  l    Fcalls   Error      Fcalls   Error
  1         1  1.00e+00        1  1.00e+00
  2        11  9.74e-01       11  9.74e-01
  3        71  2.09e-01       81  2.10e-01
  4       351  9.28e-03      471  9.29e-03
  5      1471  8.70e-05     2341  8.69e-05
 rate           1.23                1.16

Table 3: Computational results for the Boltzmann collision integral

The trapezoidal and the Clenshaw–Curtis formulas cannot be used in this case
since they require function evaluations on the boundary of the interval. Of
course, there are transformations which can overcome this drawback, but they
usually change the smoothness class of the integrand. In the case of the
Clenshaw–Curtis formula it is possible to use Chebyshev knots of the second
kind for the appropriate weight function.
Again, the Patterson and Gauss formulas show similar error behaviour, while
the Patterson formulas are computationally much cheaper. The fitted
convergence rates are relatively small in both cases, but about twice as high
as those of Quasi-Monte Carlo methods, which lie in the range of 0.61–0.65 for
this example [34].
6.4 Absorption Problem
We consider the numerical computation of a transport problem given by the
integral equation

    y(x) = x + λ ∫ₓ¹ y(z) dz,

which describes the behaviour of a particle traveling through a
one-dimensional slab of unit length. In each step, the particle travels
forward a random distance uniformly distributed between 0 and 1. If it does
not leave the slab this way, it may be absorbed with probability 1 − λ. The
solution of this problem is given by

    y(x) = (1 − (1 − λ) e^{λ(1−x)}) / λ.
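The stated solution can be checked numerically against the integral equation; a small sketch (λ = 0.5 is an arbitrary illustrative choice):

```python
import math

lam = 0.5   # absorption parameter lambda; an arbitrary illustrative choice

def y(x):
    """Claimed exact solution y(x) = (1 - (1 - lam) e^{lam (1 - x)}) / lam."""
    return (1.0 - (1.0 - lam) * math.exp(lam * (1.0 - x))) / lam

def residual(x, n=100000):
    """y(x) - x - lam * int_x^1 y(z) dz, with the integral approximated
    by the midpoint rule; close to zero if y solves the equation."""
    h = (1.0 - x) / n
    integral = h * sum(y(x + (i + 0.5) * h) for i in range(n))
    return y(x) - x - lam * integral

print(residual(0.0), residual(0.5))   # both close to zero
```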
This solution can also be represented by the infinite-dimensional integral

    y(x) = ∫_{[0,1]^∞} Σ_{n=0}^{∞} F_n(x, z) dz,

with

    F_n(x, z) = λⁿ · Θ((1 − x) − Σ_{j=1}^{n} z_j) · Θ(Σ_{j=1}^{n+1} z_j − (1 − x)),

where Θ is the Heaviside function

    Θ(s) = 1 for s ≥ 0,   Θ(s) = 0 for s < 0.
Since it is rather unlikely that the particle travels more than a few steps
before being absorbed or leaving the slab, the infinite-dimensional integral
may be approximated by truncation at a finite dimension d, i.e.

    ỹ(x) = ∫_{[0,1]^d} Σ_{n=0}^{d−1} F_n(x, z) dz.
However, in this formulation the integrand is discontinuous, which has a
negative effect on the performance of sparse grid quadrature formulas.
Alternatively, the same solution can be obtained by replacing F_n by F′_n in
the infinite-dimensional integral, where

    F′_n(x, z) = λⁿ (1 − x)ⁿ · (∏_{j=1}^{n−1} z_j^{n−j}) · (1 − (1 − x) ∏_{j=1}^{n} z_j).

Afterwards, an analogous truncation at finite dimension d can be performed. In
this formulation the integrand is smooth. The numerical results for ỹ(0) and
d = 8 are shown in Table 4.
            Trapez            Clenshaw           Patterson            Gauss
  l    Fcalls  Error      Fcalls  Error      Fcalls  Error      Fcalls  Error
  1         1  2.02e-02        1  2.02e-02        1  2.02e-02        1  2.02e-02
  2        17  8.12e-03       17  1.32e-03       17  1.33e-03       17  1.33e-03
  3       145  3.86e-03      145  7.18e-05      161  9.19e-05      177  9.19e-05
  4       849  9.74e-04      849  2.68e-06     1121  6.13e-06     1409  6.13e-06
  5      3937  1.37e-04     3937  1.15e-06     6401  3.82e-07     9377  3.82e-07
  6     15713  7.36e-06    15713  9.70e-08    31745  1.40e-08    54673  1.40e-08
 rate          0.76               1.28               1.35               1.29

Table 4: Computational results for the absorption problem
The errors for the Gauss and Patterson formulas are again nearly the same.
However, in this example the Clenshaw–Curtis formulas perform almost as well
as the Gauss formulas. As already mentioned, this can be attributed to the
dependence of the classical polynomial exactness on the dimension.
6.5 Path Integral
We consider the initial value problem given by the linear parabolic
differential equation

    ∂u/∂t = (1/2) · ∂²u/∂x² (x, t) + v(x, t) · u(x, t),

with initial condition u(x, 0) = f(x). The solution of this problem is given
by the Feynman–Kac formula [30] as

    u(x, t) = E_{x,0}[ f(ξ(t)) · exp(∫₀ᵗ v(ξ(r), t − r) dr) ],

where ξ represents a Wiener path starting at ξ(0) = x. The expectation E_{x,0}
can be approximated by discretizing time into a finite number of time steps
tᵢ = i · Δt and by approximating the integral in the exponent with a
one-dimensional quadrature formula, such as the trapezoidal rule, for each
time step. We consider the example

    v(x, t) = 1/(t + 1) + 1/(x² + 1) − 4x²/(x² + 1)²,

with initial condition u(x, 0) = 1/(x² + 1). The exact solution of this
example is then

    u(x, t) = (t + 1)/(x² + 1).
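That u(x, t) = (t + 1)/(x² + 1) indeed solves the initial value problem with this potential can be verified by central finite differences; a quick check:

```python
def u(x, t):
    """Exact solution u(x, t) = (t + 1) / (x^2 + 1)."""
    return (t + 1.0) / (x * x + 1.0)

def v(x, t):
    """Potential of the example problem."""
    return (1.0 / (t + 1.0) + 1.0 / (x * x + 1.0)
            - 4.0 * x * x / (x * x + 1.0) ** 2)

def pde_residual(x, t, h=1e-4):
    """u_t - u_xx / 2 - v * u, via central finite differences."""
    u_t = (u(x, t + h) - u(x, t - h)) / (2.0 * h)
    u_xx = (u(x + h, t) - 2.0 * u(x, t) + u(x - h, t)) / (h * h)
    return u_t - 0.5 * u_xx - v(x, t) * u(x, t)

print(pde_residual(0.0, 0.02), pde_residual(0.3, 0.5))   # both close to zero
```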
The results for t = 0.02 and x = 0 are given in Table 5. Increasing the level
and the dimension separately improves the accuracy only to a certain extent,
see Figure 3. This is another type of the curse of dimension that we encounter
here. In order to gain accuracy, it is necessary to increase both the level
and the dimension of the quadrature formula. For d = 10 and l = 0…5 the
fitted rate of convergence is about 0.93. This rate can be improved by using a
Brownian bridge discretization, which is explained in the next example.
            d = 4              d = 6              d = 8              d = 10
  l    Fcalls  Error      Fcalls  Error      Fcalls  Error      Fcalls  Error
  1         1  2.06e-02        1  2.06e-02        1  2.06e-02        1  2.06e-02
  2         9  2.90e-03       13  2.86e-03       17  2.83e-03       21  2.82e-03
  3        49  3.41e-04       97  3.29e-04      161  3.24e-04      241  3.21e-04
  4       209  3.50e-05      545  3.22e-05     1121  3.11e-05     2001  3.05e-05
  5       769  5.33e-06     2561  3.80e-06     6401  3.26e-06    13441  3.00e-06
  6      2561  2.81e-06    10625  1.38e-06    31745  8.82e-07    77505  6.50e-07

Table 5: Computational results for the path integral
Figure 3: Graphical representation of the results in Table 5 (error vs.
function calls for d = 2, 4, 6, 8, 10)

6.6 CMO Problem


A typical collateralized mortgage obligation problem consists of several
tranches which derive their cash flows from an underlying pool of mortgages
[6, 42]. The problem is to estimate the expected value of the sum of present
values of future cash flows for each of the tranches. If the pool of mortgages
has a thirty-year maturity and cash flows are obtained monthly, this results
in the evaluation of an integral of dimension d = 360 for each tranche, i.e.

    PV := ∫_{ℝ^d} v(ξ₁, …, ξ_d) · g(ξ₁) · … · g(ξ_d) dξ_d … dξ₁

with Gaussian weights g(ξᵢ) = (2πσ²)^{−1/2} e^{−ξᵢ²/(2σ²)}. The present value
v is defined by

    v(ξ₁, …, ξ_d) := Σ_{k=1}^{d} u_k m_k

with

    u_k := ∏_{j=0}^{k−1} (1 + i_j)⁻¹,
    m_k := c r_k ((1 − w_k) + w_k c_k),
    r_k := ∏_{j=1}^{k−1} (1 − w_j),
    c_k := Σ_{j=0}^{d−k} (1 + i₀)⁻ʲ,
    i_k := K₀ᵏ e^{ξ₁+…+ξ_k} i₀,
    w_k := K₁ + K₂ arctan(K₃ i_k + K₄).
The variables u_k, m_k, and i_k are the discount factor, the cash flow and the
interest rate for month k, respectively. The constant K₀ := e^{−σ²/2} is
chosen to normalize the log-normal distribution, i.e. E(i_k) = i₀. The initial
interest rate i₀, the monthly payment c, and K₁, K₂, K₃, K₄ are further
constants of the model which are usually set to

    (i₀, c, K₁, K₂, K₃, K₄, σ) := (0.007, 1.0, 0.01, −0.005, 10, 0.5, 0.0004).
For the numerical computation, the integral over ℝ^d is transformed into an
unweighted integral on [0,1]^d by a mapping ξᵢ = G(xᵢ) with G′(xᵢ) = 1/g(ξᵢ),
which takes a uniformly distributed variable xᵢ to an N(0, σ)-distributed
variable ξᵢ:

    PV = ∫_{[0,1]^d} v(G(x₁), …, G(x_d)) dx_d … dx₁.
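The recursions above translate directly into code. The following plain-Python sketch evaluates v for a single path of fluctuations ξ (a transcription of the model for illustration, not the sparse grid quadrature itself):

```python
import math

# model constants from the text
i0, c, K1, K2, K3, K4, sigma = 0.007, 1.0, 0.01, -0.005, 10.0, 0.5, 0.0004
K0 = math.exp(-sigma ** 2 / 2.0)   # normalizes the log-normal rates
d = 360

def present_value(xi):
    """v(xi_1, ..., xi_d): discounted cash flows of one rate path."""
    # interest rates i_k = K0^k * exp(xi_1 + ... + xi_k) * i0
    rates, s = [i0], 0.0
    for k in range(1, d + 1):
        s += xi[k - 1]
        rates.append(K0 ** k * math.exp(s) * i0)
    pv, u, r = 0.0, 1.0, 1.0
    for k in range(1, d + 1):
        u /= 1.0 + rates[k - 1]                   # u_k = prod_{j<k} (1+i_j)^-1
        if k > 1:
            w_prev = K1 + K2 * math.atan(K3 * rates[k - 1] + K4)
            r *= 1.0 - w_prev                     # r_k = prod_{j<k} (1-w_j)
        wk = K1 + K2 * math.atan(K3 * rates[k] + K4)            # w_k
        ck = sum((1.0 + i0) ** (-j) for j in range(d - k + 1))  # c_k
        pv += u * c * r * ((1.0 - wk) + wk * ck)                # u_k * m_k
    return pv

# flat path (all fluctuations zero): every month has rate close to i0
print(present_value([0.0] * d))
```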
The most natural way of determining the interest rate fluctuations is by a
random walk, i.e. using the recursive formula

    i_k = K₀ e^{ξ_k} i_{k−1}.

In the Brownian bridge discretization [6], however, the interest rate is
determined from a future and a past value,

    i_k = K₀ᵏ e^{b(k)} i₀ := K₀ᵏ e^{(b(k+t) + b(k−t))/2 + √(t/2) ξ_k} i₀.

So, starting with b(0) := 0 and b(d) := √d · ξ_d, the subsequent values to be
computed are b(d/2), b(d/4), b(3d/4), b(d/8), b(3d/8), … and so forth. This
leads to a concentration of the total variance in the first steps of the
discretization, which improves the rate of convergence of Quasi-Monte Carlo
methods.
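The Brownian bridge ordering can be sketched as follows, for d a power of two and assuming unit-variance Gaussian draws ξ_k (in the model the draws are N(0, σ), which only rescales the path):

```python
import math

def brownian_bridge(xi, d):
    """Brownian bridge discretization b(0), ..., b(d), d a power of two;
    xi[k] is the Gaussian draw used at point k."""
    b = [0.0] * (d + 1)
    b[d] = math.sqrt(d) * xi[d]        # b(d) := sqrt(d) * xi_d
    t = d // 2
    while t >= 1:
        # fills b(d/2), then b(d/4), b(3d/4), then b(d/8), b(3d/8), ...
        for k in range(t, d, 2 * t):
            b[k] = 0.5 * (b[k - t] + b[k + t]) + math.sqrt(t / 2.0) * xi[k]
        t //= 2
    return b

# a single unit draw at the endpoint already pins down the coarse shape
xi = [0.0] * 9
xi[8] = 1.0
print(brownian_bridge(xi, 8))
```

With all later draws zero, the interior points are just linear interpolants of the endpoints, which shows how the early coordinates carry most of the variance.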

For sparse grid quadrature formulas there is no immediate advantage from the
Brownian bridge discretization, since all dimensions are of equal importance.
However, it is possible to use adaptive strategies (see Section 5.2) which
apply lower-degree quadrature formulas in less important dimensions.
In order to detect the most important dimensions we proceed as follows.
First, we compute the integral using a Smolyak quadrature formula of level l.
Then we look at the change in the integral value when, in Smolyak's
construction, a quadrature formula of level l + 1 is applied in the s-th
dimension and quadrature formulas of level l in all other dimensions. Figure 4
(left) shows these changes of the integral value with l = 1 for a
256-dimensional problem where the dimensions are sorted in the order of the
Brownian bridge construction. We have used a problem over 21 1/3 years here
since this simplifies the Brownian bridge discretization. The construction can
of course be generalized to arbitrary dimensions.
Figure 4: Importance of the dimensions in the Brownian bridge representation
(left) and function calls vs. error for the CMO problem (right)
Figure 4 (right) shows the performance of Smolyak's construction using
Gauss–Patterson formulas with and without the Brownian bridge discretization,
using quadrature formulas of level l = 1…5 in the 30 most important
dimensions (in the sense of Figure 4 (left)) and of level l − 1 in all other
dimensions. With this construction it is possible to gain about one digit of
accuracy in comparison to the average error of a Monte Carlo method. The
performance is comparable to Quasi-Monte Carlo methods [6], but more elaborate
adaptive schemes can possibly improve the results further.

7 Concluding Remarks
We have shown various constructions for multivariate quadrature formulas on
sparse grids based on Newton–Cotes, Clenshaw–Curtis, Gauss and extended Gauss
formulas. We stated known results concerning computational cost and error
bounds and indicated a numerically stable implementation. We presented a
generalization of Smolyak's construction which can take into account
smoothness properties of the integrand that vary with the dimension.
We have seen that sparse grid quadrature formulas apply very well to many
application problems which require the evaluation of multivariate integrals.

In most cases, the Patterson formulas perform best in comparison to
Gauss–Legendre, Clenshaw–Curtis or trapezoidal formulas, considering the ratio
of necessary function evaluations to accuracy. Except for very
high-dimensional problems, they outperform Quasi-Monte Carlo methods by about
a factor of 2 in the fitted convergence rate. Quasi-Monte Carlo methods in
turn show much better results than Monte Carlo methods.
From the error bounds in Section 4.4 one can expect an asymptotically
exponential rate of convergence for smooth functions. This cannot be expected
for Monte Carlo and Quasi-Monte Carlo methods, since they cannot take into
account the smoothness properties of the integrand. In our experiments we see
the expected asymptotic exponential convergence in the downward slope of the
error graphs, although the least-squares fitted convergence rates cannot
reflect it. Furthermore, sparse grid quadrature formulas perform much better
than other multivariate quadrature formulas already in the pre-asymptotic
range, which is important for the practical computation of high-dimensional
problems. This superiority of sparse grid quadrature formulas increases
further if higher accuracies are required.
Note finally that for non-smooth integrands sparse grid quadrature formulas
perform rather poorly. Quasi-Monte Carlo methods often obtain better results
for these problems with respect to the ratio of cost to accuracy, since their
smoothness requirements are lower in comparison to Smolyak's construction.
The comparison of the various univariate basis integration routines has shown
that nested quadrature formulas are the best choice for Smolyak's
construction. Of these, the Patterson formulas perform better than the
Clenshaw–Curtis formulas in our computational examples. However, this
superiority degrades with the dimension due to the identical (classical)
polynomial exactness of the formulas for l < d.
Especially for high-dimensional problems, a careful adaptation of the
quadrature formula to the smoothness of the integrand is required in order to
limit the increase of the computational cost with the level l. For the path
integral and the CMO problem, one possibility is the Brownian bridge
construction [6], which reduces the effective dimension of the problem.
Further acceleration can be achieved by a (coarse grain) parallelization [24]
of the method based on the tensor products of the underlying one-dimensional
grids, which is subject to future research.

References
[1] A.R. Barron: Approximation and estimation bounds for artificial neural
networks, Machine Learning 14, pp. 115–133, 1994.
[2] G. Baszenski, F.-J. Delvos: Multivariate Boolean midpoint rules, in
Numerical Integration IV, H. Brass, G. Hämmerlin (eds.), pp. 1–11,
Birkhäuser, Basel, 1993.
[3] T. Bonk: A new algorithm for multi-dimensional adaptive numerical
quadrature, in Adaptive Methods – Algorithms, Theory and Applications,
W. Hackbusch, G. Wittum (eds.), pp. 54–68, Vieweg, Braunschweig, 1994.
[4] H. Brass, K.-J. Förster: On the estimation of linear functionals,
Analysis 7, pp. 237–258, 1987.
[5] H.-J. Bungartz: Dünne Gitter und deren Anwendung bei der adaptiven
Lösung der dreidimensionalen Poisson-Gleichung, Dissertation, Institut für
Informatik, TU München, 1992.
[6] R.E. Caflisch, W.J. Morokoff, A. Owen: Valuation of mortgage backed
securities using Brownian bridges to reduce effective dimension, J. Comput.
Finance, to appear, 1998.
[7] C.W. Clenshaw, A.R. Curtis: A method for numerical integration on an
automatic computer, Numerische Mathematik 2, pp. 197–205, 1960.
[8] R. Cools, B. Maerten: Integration over a hypercube, Preliminary Report,
Departement Computerwetenschappen, Katholieke Universiteit Leuven, 1998.
[9] P.J. Davis, P. Rabinowitz: Methods of Numerical Integration, Academic
Press, New York, 1975.
[10] F.-J. Delvos: d-variate Boolean interpolation, J. Approx. Theory 34,
pp. 99–114, 1982.
[11] F.-J. Delvos, W. Schempp: Boolean methods in interpolation and
approximation, Pitman Research Notes in Mathematics Series 230, Longman,
Essex, 1989.
[12] M.G. Duffy: Quadrature over a pyramid or cube of integrands with a
singularity at a vertex, SIAM J. Num. Anal. 19, pp. 1260–1262, 1982.
[13] S. Elhay, J. Kautsky: A method for computing quadratures of the
Kronrod–Patterson type, Australian Computer Science Comm. 6(1), pp. 15.1–15.8,
1984.
[14] S. Elhay, J. Kautsky: Generalized Kronrod Patterson Type Imbedded
Quadratures, Applications of Mathematics 37(2), pp. 81–103, 1992.
[15] K. Frank, S. Heinrich: Computing discrepancies of Smolyak quadrature
rules, J. Complexity 12, 1996.
[16] J. Garcke: Berechnung der kleinsten Eigenwerte der stationären
Schrödingergleichung mit der Kombinationstechnik, Diplomarbeit, Institut für
Angewandte Mathematik, Universität Bonn, 1998, to appear.
[17] W.M. Gentleman: Implementing Clenshaw–Curtis Quadrature, Comm. ACM 15,
pp. 337–346, 1972.
[18] A. Genz, A.A. Malik: An Adaptive Algorithm for Numerical Integration
over an N-Dimensional Rectangular Region, J. Comp. Appl. Math. 6,
pp. 295–302, 1980.
[19] A. Genz: A package for testing multiple integration subroutines, in
Numerical Integration, P. Keast, G. Fairweather (eds.), pp. 337–340, Kluwer,
Dordrecht, 1987.
[20] T. Gerstner: Adaptive Hierarchical Methods for Landscape Representation
and Analysis, in Proc. Workshop on Process Modelling and Landform Evolution,
S. Hergarten (ed.), Springer, 1998, to appear.
[21] G.H. Golub, J. Kautsky: Calculation of Gauss quadratures with multiple
free and fixed knots, Numerische Mathematik 41, pp. 147–163, 1983.
[22] W.J. Gordon: Blending function methods of bivariate and multivariate
interpolation and approximation, SIAM J. Num. Anal. 8, pp. 158–177, 1971.
[23] M. Griebel: A parallelizable and vectorizable multi-level algorithm on
sparse grids, in Parallel Algorithms for Partial Differential Equations,
W. Hackbusch (ed.), Notes on Numerical Fluid Mechanics 31, Vieweg,
Braunschweig, 1991.
[24] M. Griebel: The combination technique for the sparse grid solution of
PDEs on multiprocessor machines, Parallel Processing Letters 2(1), pp. 61–70,
1992.
[25] M. Griebel, M. Schneider, C. Zenger: A combination technique for the
solution of sparse grid problems, in Iterative Methods in Linear Algebra,
R. Beauwens, P. de Groen (eds.), pp. 263–281, Elsevier, North-Holland, 1992.
[26] M. Griebel, P. Oswald, T. Schiekofer: Sparse grids for boundary integral
equations, Numerische Mathematik, submitted, 1998.
[27] M. Griebel, G. Zumbusch: Adaptive Sparse Grids for Hyperbolic
Conservation Laws, in Proc. of the 7th Int. Conf. on Hyperbolic Problems,
Birkhäuser, Basel, submitted, 1998.
[28] K. Hallatschek: Fouriertransformation auf dünnen Gittern mit
hierarchischen Basen, Numerische Mathematik 63, pp. 83–97, 1992.
[29] N. Heuer, M. Maischak, E.P. Stephan: The hp-Version of the Boundary
Element Method for Screen Problems, Preprint IFAM7, Universität Hannover,
1997.
[30] M. Kac: On some connections between probability theory and differential
and integral equations, in Proceedings 2nd Berkeley Symp. Math. Stat. Prob.,
J. Neyman (ed.), Univ. of California Press, Berkeley, 1951.
[31] A.S. Kronrod: Nodes and Weights of Quadrature Formulas, Consultants
Bureau, New York, 1965.
[32] H.N. Mhaskar: Neural Networks and Approximation Theory, Neural Networks
9, pp. 711–722, 1996.
[33] G. Monegato: Stieltjes polynomials and related quadrature rules, SIAM
Rev. 24(2), pp. 137–158, 1982.
[34] W.J. Morokoff, R.E. Caflisch: Quasi-Monte Carlo Integration, J. Comp.
Phys. 122, pp. 218–230, 1995.
[35] H. Niederreiter: Random Number Generation and Quasi-Monte Carlo Methods,
SIAM, Philadelphia, 1992.
[36] E. Novak, K. Ritter: Global optimization using hyperbolic cross points,
in State of the Art in Global Optimization, C.A. Floudas, P.M. Pardalos
(eds.), pp. 19–33, Kluwer, Dordrecht, 1996.
[37] E. Novak, K. Ritter: High dimensional integration of smooth functions
over cubes, Numerische Mathematik 75, pp. 79–97, 1996.
[38] E. Novak, K. Ritter: The curse of dimension and a universal method for
numerical integration, in Multivariate Approximation and Splines,
G. Nürnberger, J.W. Schmidt, G. Walz (eds.), to appear, 1998.
[39] E. Novak, K. Ritter: Simple cubature formulas for d-dimensional
integrals with high polynomial exactness and small error, Report, Institut
für Mathematik, Universität Erlangen-Nürnberg, 1997.
[40] E. Novak, K. Ritter, A. Steinbauer: A Multiscale Method for the
Evaluation of Wiener Integrals, J. Approx. Theory, submitted, 1998.
[41] S. Paskov: Average case complexity of multivariate integration for
smooth functions, J. Complexity 9, pp. 291–312, 1993.
[42] S. Paskov, J.F. Traub: Faster valuation of financial derivatives,
J. Portfolio Management 22, pp. 113–120, 1995.
[43] T.N.L. Patterson: The Optimum Addition of Points to Quadrature Formulae,
Math. Comp. 22, pp. 847–856, 1968.
[44] T.N.L. Patterson: Algorithm 672: Generation of interpolatory quadrature
rules of the highest degree of precision with preassigned nodes for general
weight functions, ACM Trans. Math. Software 15(2), pp. 137–143, 1989.
[45] T.N.L. Patterson: Modified optimal quadrature extensions, Numerische
Mathematik 64, pp. 511–520, 1993.
[46] S.V. Pereverzev: An estimate of the complexity of the approximate
solution of Fredholm equations of the second kind with differentiable
kernels, Ukr. Math. J. 41(2), pp. 1225–1227, 1989.
[47] R. Piessens, M. Branders: A Note on the Optimum Addition of Abscissas to
Quadrature Formulas of Gauss and Lobatto Type, Math. Comp. 28, pp. 135–140,
344–347, 1974.
[48] P. Rabinowitz, S. Elhay, J. Kautsky: Empirical mathematics: the first
Patterson extension of Gauss–Kronrod rules, Int. J. Computer Math. 36,
pp. 119–129, 1990.
[49] I. Robinson, A. Begumisa: Suboptimal Kronrod extension formulae for
numerical quadrature, Numerische Mathematik 58, pp. 807–818, 1991.
[50] I.H. Sloan, S. Joe: Lattice Methods for Multiple Integration, Oxford
University Press, Oxford, 1994.
[51] S.A. Smolyak: Quadrature and interpolation formulas for tensor products
of certain classes of functions, Dokl. Akad. Nauk SSSR 4, pp. 240–243, 1963.
[52] F. Sprengel: Interpolation und Waveletzerlegung multivariater
periodischer Funktionen, Dissertation, Institut für Mathematik, Universität
Rostock, 1997.
[53] V.N. Temlyakov: Approximation of Periodic Functions, Nova Science
Publishers, New York, 1994.
[54] J.F. Traub, G.W. Wasilkowski, H. Wozniakowski: Information-Based
Complexity, Academic Press, New York, 1988.
[55] P. Van Dooren, L. De Ridder: An adaptive algorithm for numerical
integration over an n-dimensional cube, J. Comp. Appl. Math. 2, pp. 207–217,
1976.
[56] G. Wahba: Interpolating surfaces: high order convergence rates and their
associated design, with applications to X-ray image reconstruction, Report,
Dept. of Statistics, University of Wisconsin, Madison, 1978.
[57] G.W. Wasilkowski, H. Wozniakowski: Explicit cost bounds of algorithms
for multivariate tensor product problems, J. Complexity 11, pp. 1–56, 1995.
[58] C. Zenger: Sparse grids, in Parallel Algorithms for Partial Differential
Equations, W. Hackbusch (ed.), Notes on Numerical Fluid Mechanics 31, Vieweg,
Braunschweig, 1991.
