Wavelet Monte Carlo Methods for the Global Solution of Integral Equations
Stefan Heinrich
Fachbereich Informatik
Universität Kaiserslautern
D-67653 Kaiserslautern, Germany
e-mail: [email protected]
Abstract. We study the global solution of Fredholm integral equations of the second kind with the help of Monte Carlo methods. Global solution means that we seek to approximate the full solution function. This is opposed to the usual applications of Monte Carlo, where one only wants to approximate a functional of the solution. In recent years several researchers have developed Monte Carlo methods for the global problem as well.

In this paper we present a new Monte Carlo algorithm for the global solution of integral equations. We use multiwavelet expansions to approximate the solution. We study the behaviour of the variance on increasing levels, and based on this, develop a new variance reduction technique. For classes of smooth kernels and right hand sides we determine the convergence rate of this algorithm and show that it is higher than that of previously developed algorithms for the global problem. Moreover, an information-based complexity analysis shows that our algorithm is optimal among all stochastic algorithms of the same computational cost and that no deterministic algorithm of the same cost can reach its convergence rate.
1 Introduction
We are concerned with the randomized solution of linear integral equations
$$u(s) = f(s) + \int_G k(s,t)\, u(t)\, dt, \qquad (1)$$
where $G \subseteq \mathbb{R}^d$ is a bounded domain, $f$ and $k$ are given functions on $G$ and $G^2$, respectively, and $u$ is the unknown function on $G$ (detailed conditions will be given later). Examples of (1) include neutron transport, radiative heat transfer, and light transport. The function $f$ is the source density and $k(s,t)$ describes the transition from state $t$ to state $s$. Usually, $d$ is high, e.g., $d = 6$ in neutron transport, so Monte Carlo methods are the preferred choice. In their standard form these methods provide statistical estimates of one (or a few) functional(s) of the solution $u$, such as the value $u(t_0)$ at a given point or a weighted mean
$$\int_G u(t)\, g(t)\, dt$$
with $g$ a given function. It is well known, and supported both experimentally and theoretically, that for higher $d$, Monte Carlo is superior to deterministic methods for this task.
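To make this classical setting concrete, the following Python sketch estimates such a weighted mean $(u,g)$ by the von Neumann-Ulam collision estimate discussed in section 3 below. All function names and the constant toy kernel are ours, not from the paper; the chain survives each step with probability surv, so the effective transition sub-density is surv times the conditional next-state density.

\begin{verbatim}
import random

def collision_estimate(f, k, g, sample_start, pdf_start,
                       sample_next, pdf_next, surv):
    """One sample of the collision estimate of (u, g), where u solves
    u(s) = f(s) + int_G k(s,t) u(t) dt  (1-d toy sketch).
    The chain is absorbed after each step with probability 1 - surv."""
    t = sample_start()
    w = g(t) / pdf_start(t)          # importance weight of the start state
    total = w * f(t)
    while random.random() < surv:    # absorption with probability 1 - surv
        s, t = t, sample_next(t)
        w *= k(s, t) / (surv * pdf_next(s, t))
        total += w * f(t)
    return total

# Toy problem on G = [0,1]: k = 0.5, f = g = 1, so u = 2 and (u, g) = 2.
n = 100_000
est = sum(collision_estimate(lambda t: 1.0, lambda s, t: 0.5, lambda t: 1.0,
                             random.random, lambda t: 1.0,
                             lambda s: random.random(), lambda s, t: 1.0,
                             0.8)
          for _ in range(n)) / n
print(est)   # close to 2
\end{verbatim}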
But what happens if we seek to approximate the full solution function $u$ instead of just a functional of it? Monte Carlo approaches to the full solution were developed by Frolov and Chentsov [4], Sobol [13], Mikhailov [10], Voytishek [15, 16], and Prigarin [11]. There are two basic ways:

1. Fix a grid $\Gamma \subseteq G$ and obtain Monte Carlo estimates of $u(s)$ for $s \in \Gamma$. Then extend the function from $\Gamma$ to all of $G$ by an interpolation or approximation procedure.

2. Use a basis sequence, say, an orthonormal system $\{z_i\}$, to approximate
$$u \approx \sum_{i=1}^n (u, z_i)\, z_i$$
and estimate the coefficients $(u, z_i)$ by Monte Carlo.

Note that in both cases, the overall number of samples is $O(nN)$, where $n$ is the number of functionals to be estimated ($n = |\Gamma|$ in case 1) and $N$ is the number of samples for each functional.
In this paper we are concerned with the second approach (basis sequences).
We present a new algorithm which improves upon the above by keeping the same
precision but reducing the arithmetic cost considerably. This is achieved by using
multiwavelet expansions and variance reduction tuned to their levels. For multilevel
versions of the first approach (grid interpolation) we refer to Heinrich [6, 7].
2 Multiwavelets
In this section we briefly present the required facts about multiwavelets, a construction which goes back to Alpert [1]. For first applications to Monte Carlo see Heinrich [5]. We restrict ourselves to $G = [0,1]^d$. Fix an integer $r \ge 0$. For $\ell = 0, 1, \dots$ (the level index), let
$$\Gamma_\ell = \{ G_{\ell i} : i \in I_\ell \}, \qquad I_\ell = \{0, \dots, 2^\ell - 1\}^d,$$
where for $i = (i_1, \dots, i_d) \in I_\ell$,
$$G_{\ell i} = A_{\ell i_1} \times \dots \times A_{\ell i_d},$$
with $A_{\ell j} = [2^{-\ell} j,\, 2^{-\ell}(j+1))$ if $j < 2^\ell - 1$ and $A_{\ell j} = [2^{-\ell} j,\, 2^{-\ell}(j+1)]$ if $j = 2^\ell - 1$. Let $\mathcal{S}_r(\Gamma_\ell)$ be the space of functions $f$ with
$$f|_{G_{\ell i}} \in \mathcal{T}_r(G_{\ell i}) \qquad (i \in I_\ell),$$
where $\mathcal{T}_r(G_{\ell i})$ is the space of polynomials of maximum degree $r$ (so, e.g., $\mathcal{T}_1(G_{\ell i})$ is the set of multilinear functions on $G_{\ell i}$). The idea of the construction is the following: choose any orthonormal basis $w_1, \dots, w_q$ of $\mathcal{S}_r(\Gamma_0)$ (with respect to the $L_2(G)$ norm) and extend it by $w_{q+1}, \dots, w_{\bar q}$ to an orthonormal basis of $\mathcal{S}_r(\Gamma_1)$.
Now we repeat this process on higher levels: assume that we have already constructed a basis of $\mathcal{S}_r(\Gamma_\ell)$; the functions complementing it to a basis of $\mathcal{S}_r(\Gamma_{\ell+1})$ are obtained by contracting $w_{q+1}, \dots, w_{\bar q}$ to the subcubes of $\Gamma_\ell$. More precisely, let $C_{\ell i} : L_\infty(G) \to L_\infty(G)$ be the contraction operators defined for $f \in L_\infty(G)$ by
$$(C_{\ell i} f)(t) = \begin{cases} f(2^\ell t - i) & \text{for } t \in G_{\ell i}, \\ 0 & \text{otherwise.} \end{cases}$$
Put
$$z_{0ij} = w_j \qquad (i \in I_0,\; j = 1, \dots, q)$$
and, for $\ell \ge 1$,
$$z_{\ell ij} = 2^{(\ell - 1)d/2}\, C_{\ell - 1, i}\, w_{q+j} \qquad (i \in I_{\ell - 1},\; j = 1, \dots, \bar q - q).$$
Clearly, $\{ z_{\ell ij} : \ell \le m,\; i, j \text{ as above} \}$ is an orthonormal basis of $\mathcal{S}_r(\Gamma_m)$, and $z_{\ell ij}$ vanishes outside of $G_{\ell - 1, i}$ (for $\ell \ge 1$). Note that for $r = 0$ we get the $d$-dimensional Haar basis.
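As an illustration, here is a small Python check of the construction for $r = 0$, $d = 1$ (so $q = 1$, $\bar q = 2$, and $w_2$ is the Haar mother wavelet); the function names are ours. The midpoint rule is exact for these piecewise constant functions, so the computed Gram matrix is the identity up to rounding.

\begin{verbatim}
import numpy as np

def haar_z(level, i, t):
    """Basis function z_{level,i,1} for r = 0, d = 1 (the Haar basis),
    following the contraction construction:
    z = 2^((level-1)/2) * w_2(2^(level-1) t - i) on the supporting cube."""
    t = np.asarray(t)
    if level == 0:
        return np.ones_like(t)          # w_1 = 1 on [0,1)
    x = 2.0 ** (level - 1) * t - i      # local coordinate on G_{level-1,i}
    inside = (x >= 0) & (x < 1)
    w2 = np.where(x < 0.5, 1.0, -1.0)   # Haar mother wavelet w_2
    return 2.0 ** ((level - 1) / 2) * w2 * inside

# numerical check of orthonormality up to level m = 3
m, n = 3, 2 ** 12
t = (np.arange(n) + 0.5) / n            # midpoint quadrature on [0,1]
basis = [haar_z(0, 0, t)] + [haar_z(l, i, t)
                             for l in range(1, m + 1)
                             for i in range(2 ** (l - 1))]
gram = np.array([[np.mean(b1 * b2) for b2 in basis] for b1 in basis])
print(np.allclose(gram, np.eye(len(basis)), atol=1e-9))  # True
\end{verbatim}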
We also need related interpolation operators. Let $C(G)$ denote the space of continuous functions on $G$ and let
$$P_\ell : C(G) \to \mathcal{S}_r(\Gamma_\ell)$$
be the piecewise multivariate Lagrange interpolation on $\Gamma_\ell$ (for $r = 0$ we interpolate at the midpoints of the subcubes). These operators satisfy
$$\sup_\ell \| P_\ell : C(G) \to L_\infty(G) \| \le c_1,$$
and for $f \in C^\kappa(G)$,
$$\| f - P_\ell f \|_{L_\infty(G)} \le c_{2,\kappa}\, 2^{-\kappa \ell}\, \| f \|_{C^\kappa(G)}, \qquad (2)$$
where $\kappa \le r + 1$ is a positive integer,
$$\| f \|_{C^\kappa(G)} = \max_{|\alpha| \le \kappa}\, \sup_{t \in G} |(D^\alpha f)(t)|,$$
and the constants do not depend on $\ell$.
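The decay rate in (2) is easy to observe numerically. The following Python sketch (our names; a one-dimensional stand-in for $P_\ell$ with $r = 1$, hence $\kappa = 2$ in (2)) measures the sup-norm interpolation error on successive levels; the products $e \cdot 4^\ell$ stabilize, confirming the rate $2^{-2\ell}$.

\begin{verbatim}
import numpy as np

def interp_error(f, level, n_test=4096):
    """Sup-norm error of piecewise-linear interpolation (r = 1) of f on
    the level-`level` dyadic partition of [0,1]."""
    t = np.linspace(0.0, 1.0, n_test)
    cells = np.minimum((t * 2 ** level).astype(int), 2 ** level - 1)
    a = cells / 2 ** level                 # left endpoint of the subinterval
    b = a + 2.0 ** (-level)                # right endpoint
    pf = f(a) + (f(b) - f(a)) * (t - a) * 2 ** level  # linear interpolant
    return np.max(np.abs(f(t) - pf))

f = lambda t: np.sin(2 * np.pi * t)
for level in range(1, 7):
    e = interp_error(f, level)
    print(level, e, e * 4 ** level)   # e * 4**level stays O(1): rate 2^(-2l)
\end{verbatim}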
Observe that, due to the local structure of the interpolation, $(P_\ell f)(s)$ depends only on the values of $f$ on the subcube of $\Gamma_\ell$ containing $s$.

3 The Algorithm

From now on we assume that $f \in C(G)$, $k \in C(G^2)$, and that the spectral radius of $T_{|k|}$ in $L_\infty(G)$ is smaller than one, where $T_k$ denotes the integral operator
$$(T_k g)(s) = \int_G k(s,t)\, g(t)\, dt$$
for $g \in L_\infty(G)$. Consequently, $(I - T_k)^{-1}$ is bounded in $L_\infty(G)$, where $I$ denotes the identity operator. Hence, (1) is uniquely solvable.
We fix a final level $m$ and approximate $u$ by
$$u \approx u_m = \sum_{\ell=0}^m \sum_{i,j} (u, z_{\ell ij})\, z_{\ell ij},$$
where $(u, z_{\ell ij})$ denotes the scalar product of $L_2(G)$. Clearly, $u_m$ is the orthogonal projection of $u$ onto $\mathcal{S}_r(\Gamma_m)$. Our algorithm will approximate $u_m$ by Monte Carlo.
For this purpose, suppose that $\xi_{\ell ij}$ is a stochastic estimate of $(u, z_{\ell ij})$, that is, $\xi_{\ell ij}$ is a random variable on some (universal for the whole algorithm) probability space $(\Omega, \Sigma, \mathbb{P})$ such that
$$\mathbb{E}\, \xi_{\ell ij} = (u, z_{\ell ij}).$$
The construction of $\xi_{\ell ij}$ is the crucial point and will be described later. We define a vector valued random variable
$$\xi_\ell = \sum_{i,j} \xi_{\ell ij}\, z_{\ell ij} \in \mathcal{S}_r(\Gamma_\ell) \ominus \mathcal{S}_r(\Gamma_{\ell - 1}) \qquad (3)$$
representing the contribution to level $\ell$. Assume that the $\xi_\ell$ ($\ell = 0, \dots, m$) are independent. Fix natural numbers $N_\ell$ and let $\xi_\ell^{(a)}$ ($a = 1, \dots, N_\ell$) be independent copies of $\xi_\ell$ (so that $\{ \xi_\ell^{(a)} : \ell = 0, \dots, m,\; a = 1, \dots, N_\ell \}$ is an independent family). Put
$$\theta = \sum_{\ell=0}^m \frac{1}{N_\ell} \sum_{a=1}^{N_\ell} \xi_\ell^{(a)} = \sum_{\ell=0}^m \sum_{i,j} \left( \frac{1}{N_\ell} \sum_{a=1}^{N_\ell} \xi_{\ell ij}^{(a)} \right) z_{\ell ij}, \qquad (4)$$
where $\xi_{\ell ij}^{(a)}$ denote the components of $\xi_\ell^{(a)}$. It follows that
$$\mathbb{E}\, \theta = \sum_{\ell=0}^m \sum_{i,j} (u, z_{\ell ij})\, z_{\ell ij} = u_m. \qquad (5)$$
Hence, our algorithm produces a biased, vector valued estimate. So far this is nothing but a formal splitting over the levels. The key point is the construction of $\xi_{\ell ij}$ tuned to the level $\ell$. For this sake we note that since $z_{\ell ij}$ is orthogonal to $\mathcal{S}_r(\Gamma_{\ell - 1})$,
$$(u, z_{\ell ij}) = (u - P_{\ell - 1} u,\; z_{\ell ij}) \qquad (6)$$
for $\ell \ge 1$. We use this relation in order to modify the standard von Neumann–Ulam collision estimate (see, e.g., Spanier and Gelbard [12], Ermakov and Mikhailov [3]) in the following way. First we introduce functions $g_\ell(s)$ and $h_\ell(s,t)$, $\ell = 0, \dots, m$, by setting $g_0 = f$, $h_0 = k$, and for $\ell \ge 1$,
$$g_\ell = f - P_{\ell - 1} f,$$
$$h_\ell(\cdot, t) = k(\cdot, t) - P_{\ell - 1} k(\cdot, t).$$
Due to our requirements on $f$ and $k$ made at the beginning of this chapter, $g_\ell$ and $h_\ell$ are well-defined. Next we define a Markov chain on $G$. For this purpose let $p_{\ell ij}$, $p_\ell$, and $p$ be arbitrary measurable, non-negative functions on $G$ and $G^2$, respectively, satisfying
$$\int_G p_{\ell ij}(t)\, dt = 1, \qquad \int_G p_\ell(s,t)\, dt \le 1, \qquad \int_G p(s,t)\, dt \le 1,$$
$$p_{\ell ij}(s) \ne 0 \ \text{ whenever } \ z_{\ell ij}(s) \ne 0,$$
$$p_\ell(s,t) \ne 0 \ \text{ whenever } \ h_\ell(s,t) \ne 0, \ \text{ and}$$
$$p(s,t) \ne 0 \ \text{ whenever } \ k(s,t) \ne 0,$$
for almost all $s \in G$ and $(s,t) \in G^2$, respectively. We also assume that the spectral radius of $T_p$ in $L_\infty(G)$ is smaller than one. The chain starts in $t_0$, distributed according to the initial density $p_{\ell ij}$; it passes from $t_0$ to $t_1$ according to the transition density $p_\ell(t_0, \cdot)$ and from $t_{\tau - 1}$ to $t_\tau$ ($\tau \ge 2$) according to $p(t_{\tau - 1}, \cdot)$. Let $(t_0, t_1, \dots, t_\nu)$ be such a trajectory. That is, after $t_0$ and after each subsequent state the chain terminates with probability
$$1 - \int_G p_\ell(s,t)\, dt \qquad \text{and} \qquad 1 - \int_G p(s,t)\, dt,$$
respectively. We define the modified collision estimate as follows. Put
$$\xi_{\ell ij} = \frac{z_{\ell ij}(t_0)}{p_{\ell ij}(t_0)} \left( g_\ell(t_0) + \frac{h_\ell(t_0, t_1)}{p_\ell(t_0, t_1)} \sum_{\tau=1}^{\nu} f(t_\tau) \prod_{\sigma=2}^{\tau} \frac{k(t_{\sigma-1}, t_\sigma)}{p(t_{\sigma-1}, t_\sigma)} \right).$$
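In code, one realization of $\xi_{\ell ij}$ can be produced as follows. This is a sketch under simplifying assumptions, not the paper's implementation: $G = [0,1]$; both transition densities $p_\ell$ and $p$ are the constant sub-density surv (uniform next state, survival probability surv per step); and $g_\ell$, $h_\ell$, $z_{\ell ij}$, $p_{\ell ij}$ are passed in as precomputed callables. The demo at the bottom uses $r = 0$, level $\ell = 1$, the Haar function $z$ on $[0,1]$, and a kernel for which $(u, z_{\ell ij}) = -1/3$ exactly.

\begin{verbatim}
import random

def modified_collision_estimate(z, p_start_sample, p_start_pdf,
                                g_l, h_l, f, k, surv):
    """One realization of the modified collision estimate xi_{l,i,j}
    on G = [0,1], with p_l = p = the constant sub-density `surv`."""
    t0 = p_start_sample()
    front = z(t0) / p_start_pdf(t0)         # z_{lij}(t0) / p_{lij}(t0)
    if random.random() >= surv:              # chain dies before reaching t1
        return front * g_l(t0)
    t1 = random.random()
    acc, w, t_cur = 0.0, 1.0, t1
    while True:
        acc += w * f(t_cur)                  # sum over tau >= 1
        if random.random() >= surv:          # absorption
            break
        t_next = random.random()
        w *= k(t_cur, t_next) / surv         # k(t_{tau-1},t_tau)/p(...)
        t_cur = t_next
    return front * (g_l(t0) + h_l(t0, t1) / surv * acc)

# Demo: f(t) = t, k(s,t) = 0.4*(s+0.5), level l = 1, Haar z,
# P_0 = value at the midpoint (r = 0).  Exact coefficient: -1/3.
f = lambda t: t
k = lambda s, t: 0.4 * (s + 0.5)
z = lambda t: 1.0 if t < 0.5 else -1.0
g1 = lambda t: f(t) - f(0.5)                 # f - P_0 f
h1 = lambda s, t: k(s, t) - k(0.5, t)        # k(.,t) - P_0 k(.,t)
n = 200_000
est = sum(modified_collision_estimate(z, random.random, lambda t: 1.0,
                                      g1, h1, f, k, 0.8)
          for _ in range(n)) / n
print(est)   # close to (u, z_{1,0,1}) = -1/3
\end{verbatim}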
It is checked in the usual way that
$$\mathbb{E}\, \xi_{\ell ij} = (u, z_{\ell ij}),$$
provided the spectral radius of $T_{|k|}$ in $L_\infty(G)$ is smaller than one and $h_\ell^2 / p_\ell$ and $k^2 / p$ define bounded operators in $L_\infty(G)$. We measure the error of the algorithm $\theta$ in the mean square sense,
$$e(\theta) = \left( \mathbb{E}\, \| u - \theta \|_{L_\infty(G)}^2 \right)^{1/2}.$$
Denote the norm variance of $\xi_\ell$ by
$$v_\ell = \mathbb{E}\, \| \xi_\ell - \mathbb{E}\, \xi_\ell \|_{L_\infty(G)}^2,$$
and the distance of $u$ to $\mathcal{S}_r(\Gamma_m)$ in the norm of $L_\infty(G)$ by $\mathrm{dist}(u, \mathcal{S}_r(\Gamma_m), L_\infty(G))$.
Lemma 1. There is a constant $c > 0$ depending only on $r$ and $d$ such that
$$e(\theta) \le c \left( \mathrm{dist}(u, \mathcal{S}_r(\Gamma_m), L_\infty(G)) + \Big( m \sum_{\ell=0}^m v_\ell N_\ell^{-1} \Big)^{1/2} \right).$$
Proof. By (5) and the triangle inequality we have
$$e(\theta) \le \| u - u_m \|_{L_\infty(G)} + \left( \mathbb{E}\, \| \theta - \mathbb{E}\, \theta \|_{L_\infty(G)}^2 \right)^{1/2}.$$
To estimate the deterministic component, let $Q_m$ denote the orthogonal projection from $L_2(G)$ onto $\mathcal{S}_r(\Gamma_m)$. Using a local representation of $Q_m$ (see [5], section 8), it is readily shown that
$$\sup_m \| Q_m : L_\infty(G) \to L_\infty(G) \| < \infty.$$
It follows that
$$\| u - u_m \|_{L_\infty(G)} = \| (I - Q_m) u \|_{L_\infty(G)} \le \big( 1 + \| Q_m : L_\infty(G) \to L_\infty(G) \| \big)\, \mathrm{dist}(u, \mathcal{S}_r(\Gamma_m), L_\infty(G)).$$
Let us now turn to the stochastic part. Here we have to estimate the variance of sums of independent, vector valued random variables. Recall that
$$\theta = \sum_{\ell=0}^m \frac{1}{N_\ell} \sum_{a=1}^{N_\ell} \xi_\ell^{(a)},$$
where the $\xi_\ell^{(a)}$ are independent and take values in $\mathcal{S}_r(\Gamma_m)$. Since $\mathcal{S}_r(\Gamma_m)$, considered in the $L_\infty$ norm, is isometric to the $\ell_\infty$-sum of $2^{dm}$ copies of $\mathcal{S}_r(\Gamma_0)$ (just use the fact that there is no correlation between functions on the subcubes of $\Gamma_m$), it follows that the type 2 constant of $\mathcal{S}_r(\Gamma_m)$ behaves like
$$\big( \log \dim \mathcal{S}_r(\Gamma_m) \big)^{1/2} \asymp m^{1/2},$$
see Ledoux and Talagrand [9] for the notion of type 2 constants. (The standard notation $\asymp$ means that both quantities are equal up to a factor which can be bounded from above and below by positive constants not depending on $m$.) Then Proposition 9.11 of [9] gives
$$\mathbb{E}\, \| \theta - \mathbb{E}\, \theta \|_{L_\infty(G)}^2 \le c\, m \sum_{\ell=0}^m \frac{1}{N_\ell^2} \sum_{a=1}^{N_\ell} \mathbb{E}\, \| \xi_\ell^{(a)} - \mathbb{E}\, \xi_\ell^{(a)} \|_{L_\infty(G)}^2 = c\, m \sum_{\ell=0}^m v_\ell N_\ell^{-1}. \qquad (7)$$
This proves the lemma.
4 Convergence Analysis

The subsequent convergence analysis is carried out for a model class of smooth kernels and right hand sides. This simplifies the analysis, while the essential features of balancing variance over the levels become more transparent. Moreover, this class is well-studied from the point of view of information-based complexity, which allows us to formulate optimality results and comparisons with the deterministic setting.
To define the class, we fix a positive integer $\kappa$ (the degree of smoothness) and real parameters $\gamma_1, \gamma_3 > 0$, $0 < \gamma_2 < 1$, and put
$$\mathcal{K} = \{ k \in C^\kappa(G^2) : \| k \|_{C^\kappa(G^2)} \le \gamma_1,\; \| k \|_{L_\infty(G^2)} \le \gamma_2 \},$$
$$\mathcal{F} = \{ f \in C^\kappa(G) : \| f \|_{C^\kappa(G)} \le \gamma_3 \}.$$
We choose the following algorithm parameters:
$$r = \kappa - 1, \qquad p_{\ell ij} = |G_{\ell - 1, i}|^{-1} \chi_{G_{\ell - 1, i}} \ (\ell \ge 1), \quad p_{0ij} = \chi_G, \qquad p = p_\ell \equiv \gamma_2^{1/2},$$
that is, $p_{\ell ij}$ is the uniform density on the supporting cube of $z_{\ell ij}$, and the chain survives each transition with probability $\gamma_2^{1/2}$, the next state being uniformly distributed on $G$.

Theorem 2. Let $d > 2\kappa$. There is a constant $c > 0$ such that for each $M \in \mathbb{N}$ there is a choice of $m$ and $(N_\ell)_{\ell=0}^m$ such that for all $k \in \mathcal{K}$ and $f \in \mathcal{F}$ the multiwavelet Monte Carlo algorithm has cost at most $M$ and error
$$e(\theta) \le c\, M^{-\kappa/d} (\log M)^{\kappa/d}.$$
Proof. Fix a non-negative integer $m$, to be chosen later. There is a constant $c > 0$ such that for all $k \in \mathcal{K}$ and $f \in \mathcal{F}$, $u = (I - T_k)^{-1} f$ satisfies
$$\| u \|_{C^\kappa(G)} \le c.$$
(We shall use the same symbol $c$ for possibly different constants, all independent of $M$ and $m$.) Together with (2) this yields
$$\mathrm{dist}(u, \mathcal{S}_r(\Gamma_m), L_\infty(G)) \le c\, 2^{-\kappa m}. \qquad (8)$$
Again from (2) and the assumptions on $k$ and $f$ we derive
$$\| g_\ell \|_{L_\infty(G)} \le c\, 2^{-\kappa \ell} \qquad \text{and} \qquad \| h_\ell \|_{L_\infty(G^2)} \le c\, 2^{-\kappa \ell}.$$
Since, by our choice,
$$\left| \frac{k(s,t)}{p(s,t)} \right| \le \gamma_2^{1/2} < 1$$
and
$$\| z_{\ell ij} \|_{L_\infty(G)} \le c\, 2^{\ell d/2},$$
we get
$$| \xi_{\ell ij} | \le c\, 2^{-\kappa \ell - \ell d/2},$$
and hence, by the disjointness (up to sets of measure zero) of the supports of the $z_{\ell ij}$,
$$\| \xi_\ell \|_{L_\infty(G)} \le c\, 2^{-\kappa \ell}.$$
This implies $v_\ell \le c\, 2^{-2\kappa \ell}$, and the lemma above gives
$$e(\theta) \le c \left( 2^{-\kappa m} + \Big( m \sum_{\ell=0}^m 2^{-2\kappa \ell} N_\ell^{-1} \Big)^{1/2} \right). \qquad (9)$$
Since the expected length of the Markov chain depends only on $\gamma_2$, a realization of the random variable $\xi_{\ell ij}$ can be computed at expected cost $O(1)$. The variable $\xi_\ell$ has $O(2^{\ell d})$ components, so the cost of computing $\xi_\ell$ is $O(2^{\ell d})$. Fixing an upper bound $M$ on the overall cost, we minimize the right hand side of (9). In a first step we leave $m$ fixed and minimize the second summand,
$$m \sum_{\ell=0}^m 2^{-2\kappa \ell} N_\ell^{-1} \qquad \text{subject to} \qquad \sum_{\ell=0}^m 2^{\ell d} N_\ell \le M. \qquad (10)$$
Note that since we are only interested in the order of these quantities, we can neglect constant factors. With this in mind (a Lagrange multiplier argument gives $N_\ell \propto (2^{-2\kappa \ell} / 2^{\ell d})^{1/2}$, normalized to exhaust the budget), we can write the solution of the minimization as
$$N_\ell \asymp 2^{(\kappa + d/2)(m - \ell) - dm} M.$$
This choice gives
$$m \sum_{\ell=0}^m 2^{-2\kappa \ell} N_\ell^{-1} \le c\, m\, M^{-1}\, 2^{(d - 2\kappa)m}.$$
Next, $m$ has to be chosen in such a way that the deterministic and the stochastic error (i.e., the two summands on the right hand side of (9)) are in balance:
$$m M^{-1} 2^{(d - 2\kappa)m} \asymp 2^{-2\kappa m},$$
thus
$$m\, 2^{dm} \asymp M,$$
which is satisfied iff
$$2^{dm} \asymp M (\log M)^{-1},$$
and as the final error bound we obtain
$$e(\theta) \le c\, M^{-\kappa/d} (\log M)^{\kappa/d},$$
which proves the theorem.
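The parameter choice in this proof is easy to implement. Here is a minimal Python sketch (the function name and the integer rounding are ours): given a cost budget $M$, pick the largest $m$ with $m\, 2^{dm} \le M$ and set the $N_\ell$ according to the formula above; the resulting cost stays within a constant factor of $M$.

\begin{verbatim}
import math

def allocate(M, kappa, d):
    """Final level m with m * 2^(d m) of order M, and per-level sample
    sizes N_l of order 2^((kappa + d/2)(m - l) - d m) * M.
    Assumes d > 2 * kappa, as in the theorem."""
    m = 0
    while (m + 1) * 2 ** (d * (m + 1)) <= M:
        m += 1
    N = [max(1, math.ceil(2.0 ** ((kappa + d / 2) * (m - l) - d * m) * M))
         for l in range(m + 1)]
    cost = sum(2 ** (l * d) * N[l] for l in range(m + 1))
    return m, N, cost

m, N, cost = allocate(M=10 ** 7, kappa=1, d=3)
print(m, N, cost)   # cost = O(M); most samples sit on the coarse levels
\end{verbatim}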
This result should be compared with the lower bound on the same class of $\kappa$-smooth kernels and right hand sides obtained by the author in [6]. For the precise framework we refer to that paper and to Traub, Wasilkowski, and Woźniakowski [14].
Theorem 3 ([6]). Let $d > 2\kappa$. There is a constant $c > 0$ such that for all $M \in \mathbb{N}$ the following holds: no stochastic algorithm of cost at most $M$ can have a smaller error than
$$c\, M^{-\kappa/d} (\log M)^{\kappa/d}.$$
It follows that our algorithm is of optimal order on the class $(\mathcal{K}, \mathcal{F})$. A different optimal multilevel algorithm, using interpolation instead of wavelet decompositions, was given by the author in [6], [7].
We are also able to compare the behaviour of stochastic and deterministic algorithms. The following lower bound (which is, in fact, the optimal order) was obtained by Emelyanov and Ilin [2]. For the framework we refer to [2], [6], or [14].
Theorem 4 ([2]). There is a constant $c > 0$ such that for all $M \in \mathbb{N}$ the following holds: no deterministic algorithm of cost at most $M$ can have a smaller error than
$$c\, M^{-\kappa/(2d)}.$$
Hence the rate of convergence of the multiwavelet Monte Carlo algorithm on the class of smooth inputs is roughly the square of that of the best deterministic algorithm (this assumes, of course, that we accept the comparison between deterministic and stochastic error criteria).
Let us now compare our multiwavelet algorithm with the previously developed one-level procedures mentioned in section 1 under points 1 and 2. If the ingredients are optimally chosen, the deterministic part of the error has the same estimate $n^{-\kappa/d}$ for $\kappa$-smooth functions. The variance of the estimators for $u(s)$ or $(u, z_i) z_i$, however, is usually of order 1 (even if one uses a wavelet expansion and applies the standard estimators without the modifications described in section 3). Hence the total error is of the order
$$n^{-\kappa/d} + N^{-1/2} (\log n)^{1/2}.$$
Minimizing with respect to the cost constraint $nN \le M$ gives
$$M^{-\kappa/(d + 2\kappa)} (\log M)^{\kappa/(d + 2\kappa)},$$
a rate worse than that of our new algorithm, but, because of the condition $d > 2\kappa$, still better than that of the best deterministic algorithm.
Let us finally mention that the convergence analysis in the case $d \le 2\kappa$ can be carried out similarly. However, for $d < 2\kappa$ the algorithm is no longer optimal. An optimal algorithm for this case is given in [6]. To produce a corresponding multiwavelet algorithm, one needs to combine the approach of the present paper with the separation of the main part technique in [6], first developed in [8].
References
1. B. K. Alpert. A class of bases in $L^2$ for the sparse representation of integral operators. SIAM J. Math. Anal., 24:246–262, 1993.
2. K. V. Emelyanov and A. M. Ilin. On the number of arithmetic operations necessary for the approximate solution of Fredholm integral equations of the second kind. Zh. Vychisl. Mat. i Mat. Fiz., 7:905–910, 1967 (in Russian).
3. S. M. Ermakov and G. A. Mikhailov. Statistical Modelling. Nauka, Moscow, 1982 (in Russian).
4. A. S. Frolov and N. N. Chentsov. On the calculation of certain integrals dependent on a parameter by the Monte Carlo method. Zh. Vychisl. Mat. Mat. Fiz., 2(4):714–717, 1962 (in Russian).
5. S. Heinrich. Random approximation in numerical analysis. In: K. D. Bierstedt, A. Pietsch, W. M. Ruess, and D. Vogt, editors, Functional Analysis, pages 123–171. Marcel Dekker, 1994.
6. S. Heinrich. Monte Carlo complexity of global solution of integral equations. J. Complexity, 14:151–175, 1998.
7. S. Heinrich. A multilevel version of the method of dependent tests. In: S. M. Ermakov, Y. N. Kashtanov, and V. B. Melas, editors, Proceedings of the 3rd St. Petersburg Workshop on Simulation, pages 31–35. St. Petersburg University Press, 1998.
8. S. Heinrich and P. Mathé. The Monte Carlo complexity of Fredholm integral equations. Math. of Computation, 60:257–278, 1993.
9. M. Ledoux and M. Talagrand. Probability in Banach Spaces. Springer, Berlin–Heidelberg–New York, 1991.
10. G. A. Mikhailov. Minimization of Computational Costs of Non-Analogue Monte Carlo Methods. World Scientific, Singapore, 1991.
11. S. M. Prigarin. Convergence and optimization of functional estimates in statistical modelling in Sobolev's Hilbert spaces. Russian J. Numer. Anal. Math. Modelling, 10(4):325–346, 1995.
12. J. Spanier and E. M. Gelbard. Monte Carlo Principles and Neutron Transport Problems. Addison-Wesley, Reading, Massachusetts, 1969.
13. I. M. Sobol. Computational Monte Carlo Methods. Nauka, Moscow, 1973 (in Russian).
14. J. F. Traub, G. W. Wasilkowski, and H. Woźniakowski. Information-Based Complexity. Academic Press, New York, 1988.
15. A. V. Voytishek. Asymptotics of convergence of discretely stochastic numerical methods of the global estimate of the solution to an integral equation of the second kind. Sibirsk. Mat. Zh., 35:728–736, 1994 (in Russian).
16. A. V. Voytishek. On the errors of discretely stochastic procedures in estimating globally the solution of an integral equation of the second kind. Russian J. Numer. Anal. Math. Modelling, 11:71–92, 1996.