Random Walks on Boundary for Solving PDEs
K.K. Sabelfeld and N.A. Simonov
Utrecht, The Netherlands, 1994
VSP BV
P.O. Box 346
3700 AH Zeist
The Netherlands
© VSP BV 1994
ISBN 90-6764-183-9
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior permission of the copyright owner.
Sabelfeld, K.K.
Contents
1. Introduction
Bibliography
The monograph presents new probabilistic representations for classical boundary value problems of mathematical physics. In contrast to the well-known probabilistic representations in the form of Wiener and diffusion path integrals, the trajectories of the random walks in our representations are simulated on the boundary of the domain, as Markov chains generated by the kernels of the boundary integral equations equivalent to the original boundary value problem. The Monte Carlo methods based on the walk on boundary processes have a number of advantages: (1) high-dimensional problems can be solved; (2) the method is grid-free and gives the solution simultaneously at arbitrary points; (3) exterior and interior boundary value problems are solved using one and the same random walk on boundary process; (4) in contrast to the classical probabilistic representations, there is no ε-error generated by the approximations in the ε-boundary; and (5) parallel implementation of the walk on boundary algorithms is straightforward and much simpler than for the classical schemes.
This is the first book devoted to the walk on boundary algorithms. First introduced by
K. Sabelfeld for solving the interior and exterior boundary value problems for the Laplace and
heat equations, the method was then extended to all the main boundary value problems of the
potential and elasticity theories.
The book is intended for specialists in applied and computational mathematics and applied probability, and for students and post-graduates studying new numerical methods for solving PDEs.
Keywords: Markov chains, double layer potentials, heat and elasticity potentials, boundary integral equations, random estimators, random walk on boundary.
Chapter 1
Introduction
It is well known that random walk methods for boundary value problems (BVPs) in high-dimensional domains with complex boundaries are quite efficient, especially if it is necessary to find the solution not at all points of a grid, but only at some points of interest. One of the most impressive features of Monte Carlo methods is the possibility of calculating probabilistic characteristics of the solutions to BVPs with random parameters (random boundary functions, random sources, and even random boundaries).
Monte Carlo methods for solving PDEs are based:
(a) on classical probabilistic representations in the form of Wiener or diffusion path
integrals,
(b) on probabilistic interpretation of integral equations equivalent to the original BVP
which results in representations of the solutions as expectations over Markov chains.
In approach (a), the diffusion processes generated by the relevant differential operator are simulated using numerical methods for solving ordinary stochastic differential equations (Friedmann, 1976). To achieve the desired accuracy, the discretization step must be taken small enough, which results in long simulated random trajectories.
For PDEs with constant coefficients, however, it is possible to use the strong Markov property of the Wiener process and to construct much more efficient algorithms, first developed for the Laplace equation (Müller, 1956) and known as the walk on spheres method (WSM).
This algorithm was later justified in the framework of approach (b) by passing to an equivalent integral equation of the second kind with a generalized kernel and using the Markov chain which "solves" this equation. This approach was extended to general second-order scalar elliptic equations, higher-order equations related to the Laplace operator, and some elliptic systems (Sabelfeld, 1991). We now briefly present two different approaches to constructing and justifying the walk on spheres algorithm: the first, conventional, coming from approach (a), and the second based on a converse mean value relation.
Let us start with a simple case, the Dirichlet problem for the Laplace equation in a bounded domain G ⊂ R³:
Δu(x) = 0,  x ∈ G,   (1.1)
u(y) = φ(y),  y ∈ Γ = ∂G.   (1.2)
Let w_x(t) be the Wiener process starting at the point x ∈ G, and let τ_Γ be the first passage time (the time of the first intersection of the process w_x(t) with the boundary Γ). We suppose that the boundary Γ is regular, so that the problem (1.1), (1.2) has a unique solution. Then (Dynkin, 1963)
u(x) = E_x φ(w_x(τ_Γ)).   (1.3)
Note that in (1.3) only the random points on the boundary are involved. We can thus formulate the following problem: how can these points be found without explicit simulation of the Wiener process inside the domain G?
This problem was solved in (Müller, 1956) using the following considerations. For the sphere S(x, d(x)), where d(x) denotes the distance from x to the boundary Γ, the representation (1.3) gives the mean value relation
u(x) = E_x u(w_x(τ_{S(x,d(x))})),   (1.4)
τ_{S(x,d(x))} being the first exit time from this sphere. The same representation is valid at every point y ∈ S(x, d(x)), so the strong Markov property allows us to write (1.3) as an iterated conditional expectation over successive spheres. Iterating this representation, we remark that only random points lying on the spheres S(x, d(x)), S(y, d(y)), ..., are involved. It is well known that the points w_x(τ_{S(x,d(x))}) are uniformly distributed over the sphere S(x, d(x)).
Thus we arrive at the definition of the walk on spheres process starting at x: it is the homogeneous Markov chain WS = {x_0, x_1, ..., x_k, ...} such that x_0 = x and
x_k = x_{k-1} + d(x_{k-1}) ω_k,  k = 1, 2, ...,
where {ω_k} is a sequence of independent unit isotropic vectors. It is known (Müller, 1956) that x_k → y ∈ Γ as k → ∞; however, the number of steps of the Markov chain WS is infinite with probability one. Therefore, an ε-spherical process is introduced as follows. Let N_ε = inf{n : d(x_n) < ε}; then the ε-spherical process WS_ε = {x_0, x_1, ..., x_{N_ε}} is obtained from WS by stopping after N_ε steps.
Let x̄_{N_ε} be any point on S(x_{N_ε}, d(x_{N_ε})) ∩ Γ, and set
ζ(x) = u(x_{N_ε}),  ξ(x) = φ(x̄_{N_ε}).   (1.6)
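To make the construction concrete, here is a minimal sketch of the ε-spherical process and of one sample of the estimator ξ(x), written for the unit ball in R³, where d(x) = 1 − |x| and the nearest boundary point is x/|x|; the function names and the test data are illustrative choices, not the book's.

```python
import numpy as np

rng = np.random.default_rng(1)

def walk_on_spheres(x, phi, eps=1e-3):
    """One sample of xi(x) = phi(x_bar_{N_eps}) for the unit ball."""
    x = np.array(x, dtype=float)
    while 1.0 - np.linalg.norm(x) >= eps:       # d(x) = dist(x, Gamma) for the unit ball
        d = 1.0 - np.linalg.norm(x)
        w = rng.normal(size=3)
        w /= np.linalg.norm(w)                  # independent unit isotropic vector
        x = x + d * w                           # next point, uniform on S(x, d(x))
    x_bar = x / np.linalg.norm(x)               # nearest point of Gamma to x_{N_eps}
    return phi(x_bar)

# phi(y) = y_3 has the harmonic extension u(x) = x_3, so u(0.2, 0.1, 0.3) = 0.3
phi = lambda y: y[2]
samples = [walk_on_spheres([0.2, 0.1, 0.3], phi) for _ in range(20000)]
print(np.mean(samples))   # close to 0.3, up to the O(eps) bias and the statistical error
```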
(…, 1959). In particular,
E_x ζ(x) = u(x),
while the bias of the estimator ξ(x) is controlled by ε and by the quantities
L = sup_{x,y∈Γ} |x − y|,  M = sup_{y∈Γ} |φ(y)|.
The harmonic function u satisfies the spherical mean value relation
u(x) = (1/(ω_m r^{m−1})) ∫_{S(x,r)} u(y) dσ(y)   (1.7)
for each x ∈ G and for all spheres S(x, r) contained in G; ω_m is the surface area of the unit sphere S(x, 1) in R^m. Moreover, the mean value relation (1.7) characterizes the solutions of (1.1). We need a stronger result, which gives an equivalent formulation of the problem (1.1), (1.2).
Proposition 1.1 (Integral formulation). We suppose that the problem (1.1), (1.2) has a unique solution for any continuous function φ. Suppose that there exists a function v ∈ C(G ∪ Γ), v|_Γ = φ, v(x) = u(x) in Γ_ε (ε > 0), where Γ_ε = {x ∈ G : d(x) ≤ ε} denotes the ε-boundary layer, such that the mean value relation holds at every point x ∈ G \ Γ_ε for the sphere S(x, d(x)). Then v(x) is the unique solution to the problem (1.1), (1.2).
Proof. The proof uses the maximum principle (Courant and Hilbert, 1989). Since u also satisfies the mean value relation for every sphere contained in G, we conclude that the function u − v satisfies the mean value relation at every point x ∈ G \ Γ_ε. Let F be the closed set of points x ∈ G \ Γ_ε where u − v attains its maximum M. Let x_0 be a point of F which has the minimal distance to the surface Y = {y : d(y) = ε}. If x_0 were an interior point of G \ Γ_ε, we could find a sphere S(x_0, r_0) ⊂ G for which the mean value relation holds, and hence u − v = M inside S(x_0, r_0). Therefore, x_0 must belong to Y. We repeat the same argument for the minimum of u − v. Since (u − v)|_Y = 0, we conclude that v = u in G. □
Using this proposition, we derive an integral equation equivalent to the problem (1.1), (1.2).
Let δ_x(y) be the generalized density describing the uniform distribution on the sphere S(x, d(x)), and define the kernel and the free term as follows:
k_ε(x, y) = δ_x(y) if x ∈ G \ Γ_ε,  k_ε(x, y) = 0 if x ∈ Γ_ε;
f_ε(x) = 0 if x ∈ G \ Γ_ε,  f_ε(x) = u(x) if x ∈ Γ_ε.
Then, by Proposition 1.1, u satisfies the integral equation
u(x) = ∫_G k_ε(x, y) u(y) dy + f_ε(x) ≡ K_ε u(x) + f_ε(x).   (1.9)
Proposition 1.2. For any ε > 0, the integral equation (1.9) has the unique solution given by
u(x) = f_ε(x) + Σ_{i=1}^{∞} K_ε^i f_ε(x).   (1.11)
To prove the convergence of this Neumann series it is sufficient to show that ||K_ε^{n_0}|| < 1 for some integer n_0 (this also implies the uniqueness of the solution (1.11)). Let ν(ε) = ε²/(4(d*)²); for x ∈ G \ Γ_ε the required bound is obtained in terms of ν(ε).
Now we remark that, to use (1.9) for numerical purposes, it is necessary to know the solution of the boundary value problem in Γ_ε. However, we obtain an approximation if we take in Γ_ε
u(x) = f_ε(x) ≈ φ(x̄),  x ∈ Γ_ε,   (1.13)
where x̄ is the point of Γ nearest to x, since u ∈ C(G ∪ Γ). Instead of (1.13) we could use any continuous extension of φ to Γ_ε (the ideal case is the harmonic extension).
Having the approximate equation (1.14), obtained by inserting (1.13) into (1.9), and the convergence of the Neumann series, it is possible to use the standard Monte Carlo estimators for integral equations of the second kind (see, e.g., Ermakov and Mikhailov, 1982). If we choose the transition density in each sphere in accordance with the kernel (i.e., uniformly on the surface of the sphere), and introduce absorption in Γ_ε with probability one (and no absorption in G \ Γ_ε), we obtain exactly the unbiased estimators ξ(x) and ζ(x). Estimates for the variance of the estimators of the solutions to the integral equations can be obtained from the analysis of the kernel k²/p, where p is the transition density. In our case this kernel coincides with the kernel of the original integral equation, which leads to the convergence of the Neumann series representing the variance.
Thus we see that in the walk on spheres algorithm there is no need to simulate long trajectories of the Wiener process. Nevertheless, we still have to construct a sequence of random points distributed inside the domain.
We now briefly present the idea of the walk on boundary algorithm for solving (1.1), (1.2). We suppose for simplicity that the domain is convex and that the solution can be represented in the form of a double layer potential (Vladimirov, 1981):
u(x) = ∫_Γ (cos φ_yx / (2π |x − y|²)) μ(y) dσ(y),   (1.15)
where φ_yx is the angle between the vector x − y and n_y, the interior normal vector at the point y ∈ Γ, and μ(y) is a continuous function satisfying the boundary integral equation
μ(y) = − ∫_Γ p(y, y') μ(y') dσ(y') + φ(y),   (1.16)
where
p(y, y') = cos φ_y'y / (2π |y − y'|²).
It is clear that
dΩ_x = (cos φ_yx / |x − y|²) dσ(y)
is the solid angle under which the surface element dσ(y) is seen from the point x ∈ G. Thus the isotropic distribution of y in the angular measure Ω_x at the point x corresponds to the distribution of y on Γ with the density p_0(x, y) proportional to cos φ_yx / |x − y|², normalized so that
∫_Γ p_0(x, y) dσ(y) = 1.
Y_{n+1} = Y_n + |Y_{n+1} − Y_n| ω_{n+1},
where {ω_n} is a sequence of independent unit isotropic vectors in R³; in other words, Y_{n+1} is the point at which the ray issued from Y_n in the direction ω_{n+1} crosses the boundary Γ, and Y_0 is distributed on Γ with the density p_0(x, ·). On the process WB_x = {Y_0, Y_1, ..., Y_m} one can construct a random estimator for u(x) (m ≥ 2), and the larger m, the higher is the accuracy of this representation. More exactly, the cost needed to achieve the accuracy ε grows like ε^{−2} up to a logarithmic factor, which shows that this algorithm has a high efficiency.
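The following sketch illustrates such a walk for the interior Dirichlet problem in the unit ball. It is not the book's exact estimator: the normalization of the kernel and the device of halving the last term (which compensates for the unit eigenvalue of the double layer operator on a closed convex surface) are our assumptions, chosen only so that this self-contained example is consistent.

```python
import numpy as np

rng = np.random.default_rng(2)

def hit_unit_sphere(p, w):
    """First intersection of the ray p + t*w (t > 0) with the unit sphere."""
    b = p @ w
    t = -b + np.sqrt(b * b + 1.0 - p @ p)
    return p + t * w

def walk_on_boundary(x, phi, m=8):
    """One sample of an estimator for u(x), u harmonic in the unit ball, u = phi on the sphere."""
    w = rng.normal(size=3); w /= np.linalg.norm(w)   # isotropic direction from the interior point x
    y = hit_unit_sphere(np.asarray(x, float), w)     # Y_1 on Gamma
    total, sign = 0.0, 1.0
    for n in range(m + 1):
        weight = 0.5 if n == m else 1.0              # halve the last term (unit eigenvalue)
        total += weight * sign * phi(y)
        sign = -sign
        if n < m:
            w = rng.normal(size=3); w /= np.linalg.norm(w)
            if w @ y > 0.0:                          # keep only inward directions: isotropic
                w = -w                               # over the half-space of directions at y
            y = hit_unit_sphere(y, w)
    return 2.0 * total                               # the kernel integrates to 2 for interior x

phi = lambda y: y[2]                                 # boundary data; exact solution u(x) = x_3
est = np.mean([walk_on_boundary([0.0, 0.0, 0.3], phi, m=8) for _ in range(40000)])
print(est)                                           # close to 0.3
```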
Remarkably, the same walk on boundary process can be used to solve the interior and exterior Dirichlet, Neumann, and third boundary value problems. Note also that the method is grid-free and gives the solution simultaneously at arbitrary prescribed points. In contrast to the classical probabilistic representations, there is no ε-error coming from the approximation in Γ_ε. Note also that the parallelization of the walk on boundary algorithms is very simple, since all the trajectories have one and the same length.
In the present book we construct and justify the walk on boundary algorithms for
three classes of PDEs: (1) classical stationary potential problems (Chapter 3), (2) heat
equation (Chapter 4), and (3) spatial elasticity problems (Chapter 5). The basic Monte
Carlo methods using Markov chains for solving integral equations are presented in Chapter
2. In Chapter 6 we discuss different aspects related to the walk on boundary algorithms
(the Robin problem and the ergodicity theorem, evaluation of derivatives, etc.). Nonlinear
problems are treated in Chapter 7.
Chapter 2
Random walk algorithms for solving integral equations
Conventional Monte Carlo methods for solving integral equations of the second kind are based on the Neumann series representation of the solution and, consequently, they are applicable only if the simple iterations converge. This, in turn, happens when the spectral radius of the integral operator is less than 1 (more exact formulations are given in the next section). However, the boundary integral equations of potential theory cannot be treated in the framework of this conventional scheme, since the spectral radii of the corresponding operators are not less than one. Therefore, we shall introduce different, in general biased, estimators for solving integral equations whose Neumann series may be divergent. This technique can be applied to general integral equations with completely continuous operators.
2.1. Conventional Monte Carlo scheme
Consider the integral equation
φ(x) = ∫_Γ k(x, y) φ(y) dy + f(x),   (2.1)
or, in operator form,
φ = Kφ + f.
Here Γ is a bounded domain in R^m.
We suppose that the integral operator K acts in a Banach space X(Γ) of functions integrable in the domain Γ. For example, if
1. (∫_Γ |k(x, x')|^r dx')^{1/r} ≤ C_1 for almost all x ∈ Γ, r > 0,
2. (∫_Γ |k(x, x')|^σ dx)^{1/σ} ≤ C_2 for almost all x' ∈ Γ, σ > 0,
3. p > σ > p − r(p − 1),
then K is a completely continuous operator from L^p(Γ) to L^p(Γ), with a norm that can be estimated in terms of C_1 and C_2 (Kantorovich and Akilov, 1964). An important particular case is the weakly singular kernel
k(x, x') = b(x, x') / |x − x'|^n,
where b(x, x') is a bounded continuous function and n < m. If, in addition, p > m/(m − n), then K : L^p(Γ) → C(Γ).
We now define a Markov chain in the phase space Γ as a sequence of random points Y = {Y_0, Y_1, ..., Y_N}, where the initial point Y_0 is distributed in Γ with an initial density p_0(x), and the next points are determined by the transition density
p(x, y) = (1 − g(x)) r(x, y).
Here r(Y_{i−1}, Y_i) is the conditional distribution density of Y_i given Y_{i−1}, and g(Y_{i−1}) is the probability that the chain terminates in the state Y_{i−1} (i ≥ 1). Thus, the chain may have an infinite or a finite number N of states, depending on the value of g.
Along the chain we define the random weights
Q_0 = f(Y_0)/p_0(Y_0),  Q_i = Q_{i−1} k(Y_i, Y_{i−1}) / p(Y_{i−1}, Y_i),
and we require that
p_0(x) ≠ 0 for {x : f(x) ≠ 0},  p(x, y) ≠ 0 for {y : k(y, x) ≠ 0}.
Proposition 2.1. Consider the random variables
ξ_i = Q_i h(Y_i),   (2.3)
and suppose that the Neumann series for the majorant equation with the operator
K_1 φ = ∫_Γ |k(x, y)| φ(y) dy
converges. Then the estimator ξ = Σ_{i=0}^{N} ξ_i is unbiased for the linear functional I_h = (φ, h) = ∫_Γ φ(x) h(x) dx, that is, Eξ = I_h.
Proof. Define δ_n = 0 if the chain terminates in the state Y_{n−1}, and δ_n = 1 if the transition Y_{n−1} → Y_n takes place, and set
Δ_n = δ_1 δ_2 ··· δ_n,  Δ_0 = 1.
Hence
ξ = Σ_{n=0}^{∞} Δ_n Q_n h(Y_n).
Let us suppose for a while that the functions k, f and h are non-negative. Then we can take the expectation termwise in the series:
Eξ = Σ_{n=0}^{∞} E[Δ_n Q_n h(Y_n)].
Conditioning on the states of the chain,
E[Δ_n Q_n h(Y_n)] = E_{(Y_0,...,Y_n)} E[Δ_n Q_n h(Y_n) | Y_0, ..., Y_n] = E_{(Y_0,...,Y_n)} [ Q_n h(Y_n) Π_{k=0}^{n−1} (1 − g(Y_k)) ],
since
P(δ_1 = ··· = δ_n = 1 | Y_0, ..., Y_n) = Π_{k=0}^{n−1} [1 − g(Y_k)].
Writing this expectation as an integral against the density p_0(x_0) Π_{k=0}^{n−1} r(x_k, x_{k+1}) and recalling that p = (1 − g) r, all the simulation densities cancel with the weights, and we get
E[Δ_n Q_n h(Y_n)] = ∫_Γ dx_0 ··· ∫_Γ dx_n f(x_0) Π_{k=0}^{n−1} k(x_{k+1}, x_k) h(x_n) = (K^n f, h).
In the general case, set η_m = Σ_{n=0}^{m} Δ_n Q_n h(Y_n). Then
|η_m| ≤ Σ_{n=0}^{m} |Δ_n Q_n h(Y_n)|,
and the right-hand side has a finite expectation because the Neumann series for the majorant operator K_1 converges, i.e., ||K_1^{n_0}|| < 1 for some integer n_0. But
E η_m = Σ_{k=0}^{m} E(Δ_k Q_k h(Y_k)) = Σ_{k=0}^{m} (K^k f, h).
Thus, by passing to the limit,
Eξ = lim_{m→∞} E η_m = Σ_{k=0}^{∞} (K^k f, h) = (φ, h) = I_h. □
Analogously, introduce the adjoint weights
Q'_0 = h(Y_0)/p_0(Y_0),  Q'_i = Q'_{i−1} k(Y_{i−1}, Y_i) / p(Y_{i−1}, Y_i),
and the random variables
ζ_i = Q'_i f(Y_i),  ζ = Σ_{i=0}^{N} Q'_i f(Y_i).
Using the duality relation
(K^i f, h) = (f, K*^i h),
where K* is the adjoint operator,
K* ψ(x) = ∫_Γ k(y, x) ψ(y) dy,
and φ* = K* φ* + h is the adjoint integral equation, one obtains in the same way
Eζ = Eξ = I_h.   (2.8)
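As a small illustration of this scheme, the sketch below estimates I_h for a one-dimensional equation on Γ = [0, 1] with k ≡ 1/2, f(x) = x and h ≡ 1, for which φ(x) = x + 1/2 and I_h = 1 exactly; the uniform transition density and the constant termination probability g = 1/2 are our choices, not prescriptions of the book.

```python
import numpy as np

rng = np.random.default_rng(3)

# phi(x) = int_0^1 k(x,y) phi(y) dy + f(x) with k = 1/2, f(x) = x  =>  phi(x) = x + 1/2,
# so the functional I_h = (phi, h) with h = 1 equals 1.
k = lambda x, y: 0.5
f = lambda x: x
h = lambda x: 1.0
g = 0.5                                 # termination probability; p0 = r = 1 on [0, 1]

def xi_direct():
    """One sample of xi = sum_n Q_n h(Y_n) with Q_0 = f(Y_0)/p0(Y_0) and
    Q_n = Q_{n-1} k(Y_n, Y_{n-1}) / ((1 - g) r(Y_{n-1}, Y_n))."""
    y = rng.uniform()
    Q, total = f(y), 0.0
    while True:
        total += Q * h(y)
        if rng.uniform() < g:           # the chain terminates in the current state
            return total
        y_new = rng.uniform()           # next state, conditional density r = 1
        Q *= k(y_new, y) / ((1.0 - g) * 1.0)
        y = y_new

print(np.mean([xi_direct() for _ in range(200000)]))   # close to I_h = 1
```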
If the chain is started at the fixed point Y_0 = x, the same weights (with Q_0 = Q'_0 = 1) give pointwise representations of the solutions of the adjoint and of the original equations:
φ*(x) = h(x) + E Σ_{n=1}^{N} Q_n h(Y_n),   (2.9)
φ(x) = f(x) + E Σ_{n=1}^{N} Q'_n f(Y_n).   (2.10)
The variance of the estimator ξ can be written as
Dξ = (χ, [2φ* − h]) − I_h²,   (2.11)
where χ is the solution of an auxiliary integral equation whose kernel is k²/p (cf. the variance analysis mentioned in Chapter 1). Thus, if the corresponding operator K̂ satisfies ||K̂^{n_0}|| < 1 for some integer n_0, and the variance of the direct estimator for the integral equation with this kernel is finite, then the variance Dξ is finite too.
Remark 2.1.
Let us consider a system of integral equations
Φ(x) = ∫_Γ K[x, y] Φ(y) dy + F(x).
Here K[x, y] is the matrix of kernels {k_{ij}(x, y)}, Φ = (φ_1, ..., φ_s)^T, F is a given vector function, and P[x, y] is a given matrix used in the adjoint weights below.
It is not difficult to extend the estimators ξ and ζ to the iterations (P, K^i[ ]F), where K[ ] is the matrix-integral operator generated by the matrix K[x, y]:
K[ ]F = ∫_Γ K[x, y] F(y) dy.
Indeed, let us choose a homogeneous Markov chain {Y_0, Y_1, ...} defined by an initial density p_0 and a transition density p(x, y) such that
p_0(y) ≠ 0 for {y : F(y) ≠ 0},  p(x, y) ≠ 0 for {y : K[x, y] ≠ 0},
and define the matrix weights
Q_i = K[Y_i, Y_{i−1}] Q_{i−1} / p(Y_{i−1}, Y_i),  Q_0 = F(Y_0)/p_0(Y_0),
Q'_i = Q'_{i−1} K[Y_{i−1}, Y_i] / p(Y_{i−1}, Y_i),  Q'_0 = P[x, Y_0]/p_0(Y_0).
2.2. Biased estimators
Consider the equation with a complex parameter λ,
φ = λ K φ + f,   (2.13)
and let λ* ≠ 0 be a regular (non-characteristic) value of the parameter, so that (2.13) has a unique solution. The resolvent representation
R_λ f = Σ_{i=0}^{∞} λ^i K^i f
converges for |λ| < |λ_1|, where λ_1 is the characteristic value of K of smallest modulus. If |λ_1| < 1, then the Neumann series diverges at λ* = 1, and the conventional scheme described in Section 2.1 fails. However, the function R_λ f is analytic in |λ| < |λ_1| and can be analytically continued to all the regular values of λ, in particular to the point λ* = 1. We use a conformal mapping of the parameter λ to carry out this continuation (see, e.g., Kantorovich and Krylov, 1964).
Let λ = ψ(η) be a conformal mapping of the unit disk Δ = {η : |η| < 1} onto a domain D which contains the point λ* but none of the characteristic values λ_k of the operator K. Then
η* = ψ^{−1}(λ*) ∈ Δ,  while  ψ^{−1}(λ_k) ∉ Δ.
The function F(η) = R_{ψ(η)} f is analytic in Δ, and its expansion
F(η) = Σ_{k=0}^{∞} η^k Σ_{i=0}^{k} b_i^{(k)} K^i f,   (2.15)
where b_i^{(k)} denotes the coefficient of η^k in the expansion of [ψ(η)]^i, is absolutely and uniformly convergent on compact subsets of Δ. Let
η_0 = min_k |ψ^{−1}(λ_k)| > 1.
Truncating (2.15) at k = n and evaluating at η = η*, we obtain
φ(x) = f(x) + Σ_{i=1}^{n} d_i^{(n)} K^i f(x) + ε(n; x),   (2.16)
where
d_i^{(n)} = Σ_{k=i}^{n} b_i^{(k)} η*^k,   (2.17)
and the remainder satisfies
|ε(n; x)| ≤ const · q^n
for some q < 1 determined by |η*| and η_0.
We assume that the region D and the mapping ψ(η) can be chosen so that all the coefficients and the value η* can be calculated with the desired accuracy.
To evaluate I_h, we use the unbiased estimators ξ in (2.3) or ζ in (2.6). Then we can introduce the random estimators
ξ(n) = Σ_{i=0}^{n} d_i^{(n)} Q_i h(Y_i),
ζ(n) = Σ_{i=0}^{n} d_i^{(n)} Q'_i f(Y_i).   (2.18)
Then
I_h = Eξ(n) + δ_1,  I_h = Eζ(n) + δ_2,   (2.19)
where
δ_1 = (ε(n; ·), h),  δ_2 = (f, ε*(n; ·)),
d_0^{(n)} = 1, and the chain is run for exactly n steps, that is, g(Y_i) = 0 for i < n and g(Y_n) = 1. We call ζ(n) the direct, and ξ(n) the adjoint, biased estimator for I_h. Here
ψ_k(x) = Σ_{i=1}^{k} b_i^{(k)} K^i h(x),   (2.21)
and we introduce the notation
Ψ(η*; x) = Σ_{k=1}^{∞} η*^k ψ_k(x).   (2.22)
2.3. Linear-fractional transformations and relations to iterative processes
Our goal is to study the iterative process for φ^{(m)} generated by the linear-fractional transformation (α, β are complex parameters)
λ = αη / (1 − βη),   (2.23)
which maps the unit disk Δ onto a disk (a half-plane when |β| = 1) in the λ-plane containing the point λ* = 1. In this case
η* = (α + β)^{−1}.
The solution can then be written as
φ(x) = f_1(x) + Σ_{i=1}^{∞} K_1^i f_1(x),   (2.24)
where
K_1 = (αK + βI)/(α + β),  f_1 = α f/(α + β).   (2.25)
Consequently, we get the iterative process
φ^{(m)}(x) = K_1 φ^{(m−1)}(x) + f_1(x),  φ^{(0)}(x) = (α/(α + β)) f(x).   (2.26)
Thus (2.24) shows that we can pass to a new integral equation with the integral operator K_1 in (2.25) and the right-hand side f_1. Assume that the characteristic values of K lie outside D. Then the series in (2.24) converges. The respective unbiased estimators (2.27) for I_h are constructed exactly as in Section 2.1, with k replaced by the kernel of K_1 and f by f_1.
Remark 2.2.
Note that we could derive this iterative process by the following transformation of the original integral equation: taking α ≠ 0 we get
φ = [(1 − α)I + αK]φ + αf,
which generates iterations of the form (2.26). □
Note also that the domain of convergence of the iterative process (2.26) is broader than D. Let, for example, β = 1, α > 0. Then D = {λ : ℜλ > −α/2}.
The Neumann series (2.24) converges if
|ψ^{−1}(λ_k)| > |η*|,  k = 1, 2, ...,
where {λ_k} are the characteristic numbers of K. Writing λ = x + iy, this condition can be rewritten explicitly in the coordinates x, y.
We consider now the transformation λ = α/(1 − βη). It maps the disk Δ onto the domain
D = {λ : |λ − α| < |β||λ|},
and ψ(0) = α ≠ 0,
η* = (1 − α)/β,
which corresponds to the iterations of the resolvent operator:
ψ = (I − αK)^{−1} f + Σ_{i=1}^{∞} [(1 − α)(I − αK)^{−1}]^i (I − αK)^{−1} αK f.
The parameter α is chosen so that the operator (I − αK)^{−1} exists. In this case the iterative process corresponding to this transformation (α ≠ 1) has the form
φ^{(m)} = (1 − α)(I − αK)^{−1} φ^{(m−1)} + (I − αK)^{−1} α f,   (2.28)
or
φ^{(m)} = αK φ^{(m)} + (1 − α) φ^{(m−1)} + αf.
In this case the Neumann series for the operator (1 − α)(I − αK)^{−1} converges.
Let us consider now the iterative processes (2.26), (2.28) for fixed m, N. In this case it is possible to change the order of summation, so that the approximate solution is represented as an operator polynomial (of finite degree) applied to the right-hand side of the equation. Thus, for the series generated by (2.25) we get
φ ≈ Σ_{i=0}^{N} a_i(N) K^i f + ε(N),   (2.29)
where
a_0(N) = 1, and the coefficients a_i(N), i ≥ 1, are expressed through the binomial coefficients C_k^{i−1} and powers of α and (1 − α),   (2.30)
with α and N fixed. Analogous explicit expressions (2.31) hold for the coefficients generated by the process (2.28). If we take in the expansion of [(1 − α)(I − αK)^{−1}]^n a finite number of terms, then the iterative process (2.28) leads to the expressions (2.31) for the coefficients in (2.29).
However, for some particular cases of K it is possible to construct an unbiased estimator for the kernel of the resolvent operator R = (I − αK)^{−1}. This permits one to construct estimators for φ using double randomization, substituting into the weights not the exact values of the kernel of (I − αK)^{−1} but its unbiased estimator. This estimator has a finite variance if the order of the singularity of the kernel of K is less than m/2, m being the dimension of the problem.
We consider now a simple example which will be used later on to construct the walk on boundary algorithms. Assume that all the characteristic values are real and λ_k ∈ (−∞, −a), a > 0. Without loss of generality, we assume that it is desired to solve (2.1) at λ = λ* = 1. Then it is convenient to choose the mapping of the disk Δ onto the domain D_a, the complex plane with a cut along the real axis from −a to −∞:
λ = ψ(η) = 4aη/(1 − η)².   (2.32)
Then the series R_{ψ(η)} f converges absolutely and uniformly on Δ, and the coefficients can be calculated explicitly:
b_k^{(n)} = (4a)^k C_{n+k−1}^{2k−1}
(the coefficient of η^n in [ψ(η)]^k). To construct a biased random estimator we choose m so that the remainder of the series is equal to a desired quantity ε. Then
φ = f + Σ_{k=1}^{m} d_k^{(m)} K^k f + O(ε),   (2.33)
where
d_k^{(m)} = Σ_{n=k}^{m} b_k^{(n)} η*^n.   (2.34)
The coefficients are calculated in advance, according to (2.34), and the integrals c_k = K^{k+1} f are calculated by Monte Carlo methods. Let ξ_k be an unbiased Monte Carlo estimator for c_{k−1}; then an ε-biased estimator for the solution of the integral equation has the form
ζ_ε(x) = f(x) + Σ_{k=1}^{m} d_k^{(m)} ξ_k(x).   (2.35)
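A numerical sketch of this construction is given below: the coefficients b_k^{(n)} and d_k^{(m)} are computed by multiplying truncated power series, and the result is checked on a scalar model in which K is replaced by a number μ ∈ (−1/a, 0), so that (I − K)^{-1} f is known exactly. The explicit form of the mapping (2.32) used here is the reconstruction given above, and all numerical values are illustrative.

```python
import numpy as np

a = 0.5
m = 30                                                    # truncation order
eta_star = (1 + 2 * a) - np.sqrt((1 + 2 * a) ** 2 - 1)    # psi(eta_star) = 1, |eta_star| < 1

# Taylor coefficients of psi(eta) = 4 a eta / (1 - eta)^2 = 4a * sum_{n>=1} n eta^n
psi = np.zeros(m + 1)
psi[1:] = 4 * a * np.arange(1, m + 1)

# d[k] = d_k^{(m)} = sum_{n=k}^m b_k^{(n)} eta_star^n, with b_k^{(n)} the coefficient
# of eta^n in psi(eta)^k, obtained by repeated convolution of the series
power = np.zeros(m + 1); power[0] = 1.0
d = np.zeros(m + 1)
for k in range(1, m + 1):
    power = np.convolve(power, psi)[: m + 1]
    d[k] = np.dot(power, eta_star ** np.arange(m + 1))

# scalar check: eigenvalue mu of K in (-1/a, 0); the plain Neumann series sum mu^i
# diverges for |mu| > 1, but f + sum_k d_k mu^k f approximates (1 - mu)^{-1} f
mu, f = -1.5, 1.0
approx = f + np.sum(d[1:] * mu ** np.arange(1, m + 1)) * f
print(approx, 1.0 / (1.0 - mu))     # both close to 0.4
print(np.max(np.abs(d)))            # stays below 1 (cf. Theorem 2.1 below)
```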
Theorem 2.1.
In the case of the transformation (2.32), |d_k^{(m)}| < 1 for all k and m. If the variances of the estimators ξ_k satisfy Dξ_k ≤ σ², then the cost of the estimator ζ_ε needed to achieve an error of order ε has the following order as ε → 0:
T_ε = O(|ln ε|³/ε²).   (2.36)
Proof.
It is sufficient to show that |d_k^{(m)}| < 1. Note that all the coefficients b_k^{(n)} are positive and Σ_{n=k}^{∞} b_k^{(n)} η*^n = [ψ(η*)]^k = 1, so that 0 < d_k^{(m)} < 1.
Theorem 2.2.
Assume that the conformal mapping λ = ψ(η) has only simple poles on the boundary |η| = 1. Then the cost of the estimator (2.35) is of order O(|ln ε|⁴/ε²) if Dξ_k ≤ σ² and if the condition (2.37) on the coefficients of ψ holds.
Proof.
The estimate |a_i| ≤ c for the Taylor coefficients of ψ can be obtained from the representation of a function with simple poles on the unit circle (Pólya and Szegő, 1964). This gives bounds for the coefficients b_k^{(n)}; using the Cauchy inequality one then obtains bounds for |d_k^{(n)}|, and consequently for the remainder of the series. The last inequality proves the theorem, because m = O(|ln ε|) due to the fact that the series (2.20) converges as a power series. □
Theorem 2.3.
Assume that a regular function λ = ψ_β(η) = a_1η + a_2η² + ... maps the disk |η| < 1 onto a convex domain. Then the condition (2.37) takes a simpler explicit form.
Proof.
The statement follows from the coefficient estimate |a_n| ≤ |a_1| valid for convex univalent mappings. □
Remark 2.3.
It is only required that the Neumann series for the integral equation with the kernel |k_1(x, y)| converges.
2.4. Asymptotically unbiased estimators based on singular approximation of the kernel
For the sake of simplicity, consider first a system of linear algebraic equations, i.e., (2.38) in the form x = Ax + b, or in coordinates
x_i = Σ_{j=1}^{n} a_{ij} x_j + b_i,  i = 1, ..., n,
together with the auxiliary systems
z_i = Σ_{j=1}^{n} d_{ij} z_j + b_i,   (2.40)
y_i = Σ_{j=1}^{n} d_{ij} y_j + a_{i j_0},   (2.41)
where d_{ij} = a_{ij} − a_{i j_0} γ_j are the entries of the matrix D, a_{i j_0} (i = 1, ..., n) is a fixed column of the matrix A, and γ^T = (γ_1, ..., γ_n) is an arbitrary vector.
Theorem 2.4.
Assume that the systems (2.40), (2.41) are uniquely solvable and
(γ, y) = Σ_{j=1}^{n} γ_j y_j ≠ 1.
Then
x_i = z_i + y_i (γ, z)/(1 − (γ, y)),  i = 1, ..., n.   (2.42)
Proof.
Let T = (γ, x). We first prove that x_i = z_i + y_i T, i = 1, ..., n. Indeed,
x_i = Σ_{j=1}^{n} a_{ij} x_j + b_i = Σ_{j=1}^{n} d_{ij} x_j + a_{i j_0} Σ_{j=1}^{n} γ_j x_j + b_i = Σ_{j=1}^{n} d_{ij} x_j + a_{i j_0} T + b_i,   (2.43)
so x solves the system with the matrix D and the free term b + a_{·j_0} T; by the unique solvability of (2.40) and (2.41), x_i = z_i + y_i T. Hence
T = (γ, x) = (γ, z) + (γ, y) T,  so  T = (γ, z)/(1 − (γ, y)),
which proves the theorem. □
The equality (2.42) is useful if the solution of (2.41) is somehow simpler to construct than the solution of the original equation. In particular, the situation when ρ(A) ≥ 1 but ρ(D) < 1 is interesting from the point of view of Monte Carlo methods, because in this case it is possible to construct estimators for z_i, y_i, (γ, z) and (γ, y) simultaneously on a single Markov chain.
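A direct numerical check of the identity (2.42) is easy to carry out; in the sketch below the matrix A, the column index j_0, the vector γ and the right-hand side b are arbitrary illustrative choices (the identity itself does not depend on the spectral radii, which only matter for the Monte Carlo treatment).

```python
import numpy as np

rng = np.random.default_rng(4)

n, j0 = 4, 0
A = rng.uniform(-0.6, 0.6, size=(n, n))
b = rng.uniform(-1.0, 1.0, size=n)
gamma = rng.uniform(-0.5, 0.5, size=n)

D = A - np.outer(A[:, j0], gamma)            # d_ij = a_ij - a_{i j0} gamma_j
I = np.eye(n)
z = np.linalg.solve(I - D, b)                # z = D z + b           (2.40)
y = np.linalg.solve(I - D, A[:, j0])         # y = D y + a_{. j0}    (2.41)

x_via_242 = z + y * (gamma @ z) / (1.0 - gamma @ y)    # formula (2.42)
x_direct = np.linalg.solve(I - A, b)                   # x = A x + b solved directly
print(np.max(np.abs(x_via_242 - x_direct)))            # of the order of machine precision
```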
Let us consider now the case when the kernel has the form (2.39), i.e. k(x, y) = k_1(x, y) + δ(x)γ(y). Then the auxiliary equations are
φ_1(x) = ∫_G k_1(x, y) φ_1(y) dy + f(x),  φ_2(x) = ∫_G k_1(x, y) φ_2(y) dy + δ(x).   (2.44)
Theorem 2.5.
Assume that the equations (2.44) are uniquely solvable for arbitrary right-hand sides from a suitable class of functions, and suppose that
(γ, φ_2) = ∫_G γ(y) φ_2(y) dy ≠ 1.
Then
φ(x) = φ_1(x) + φ_2(x) (γ, φ_1)/(1 − (γ, φ_2)).   (2.45)
Proof.
The proof is analogous to that of Theorem 2.4. Indeed, substituting (2.39) into (2.38) yields
φ(x) = f(x) + ∫_G k_1(x, y) φ(y) dy + δ(x) ∫_G γ(y) φ(y) dy,
and, taking into account (2.46), we obtain the desired equality. □
As in Theorem 2.4, we use this identity in the following way. Let ξ_{1,k} and ξ_{2,k}, k = 1, ..., m, be independent realizations of unbiased estimators ξ_1 and ξ_2 for φ_1(x) and φ_2(x), constructed on a Markov chain, and let
ξ̄_l = (1/m) Σ_{k=1}^{m} ξ_{l,k},  l = 1, 2.   (2.47)
Then the plug-in combination of these sample means (with analogous sample means replacing (γ, φ_1) and (γ, φ_2) in (2.45)) is an asymptotically unbiased estimator η for φ(x). Indeed, by the well-known theorem on the distribution of a smooth function of asymptotically normal random variables (Rao, 1960), the ratio estimating (γ, φ_1)/[1 − (γ, φ_2)] is asymptotically Gaussian with mean (γ, φ_1)/[1 − (γ, φ_2)] and a variance of order 1/m expressed through the variances and covariances of the underlying estimators. Analogous arguments apply to the remaining factor; hence η is an asymptotically unbiased estimator for φ(x).
The approach described above can be generalized to systems of integral equations; systems of linear algebraic equations can also be treated. Let us consider a system of integral equations
Φ(x) = ∫ K[x, y] Φ(y) dy + F(x),   (2.48)
with the kernel matrix written in the form
M_{ij}(x, y) = k_{ij}(x, y) − δ^i(x) γ^j(y),   (2.49)
where δ(x) and γ(y) are some arbitrary column vectors with components δ^1(x), ..., δ^n(x) and γ^1(y), ..., γ^m(y), respectively.
We introduce the auxiliary systems of integral equations
Φ_0(x) = ∫ M[x, y] Φ_0(y) dy + F(x),  Φ_l(x) = ∫ M[x, y] Φ_l(y) dy + δ_l(x),   (2.50)
together with the quantities obtained by integrating the solutions against the vectors γ_l, of the form ∫ γ_l^T(y) Φ(y) dy.
Theorem 2.6.
Assume that the system (2.52) is uniquely solvable and that the operator (I − Λ)^{−1} exists. Then the solution to (2.48) is represented as
φ(x) = φ_0(x) + Φ^T(x) J,
where the vector J is determined from the system of linear algebraic equations
J = ΛJ + b.   (2.53)
Proof.
Substituting K[x, y] from (2.51) into (2.48) yields
φ(x) = f(x) + ∫ M[x, y] φ(y) dy + Σ_{i=1}^{n} δ_i(x) J_i,
where
J_i ≡ ∫ γ_i^T(y) φ(y) dy.   (2.55)
Integrating the representation φ = Φ_0 + Σ_j Φ_j J_j against each γ_l,
∫ γ_l^T(y) φ(y) dy = ∫ γ_l^T(y) Φ_0(y) dy + Σ_j J_j ∫ γ_l^T(y) Φ_j(y) dy,
we obtain a closed linear system for the quantities J_i; this is the system (2.53), with the entries of Λ and of b expressed through the integrals of the γ_l against the auxiliary solutions. This proves the representation. □
Note that the approach described can be applied also to a system of linear equations x = Ax + b, where A is an m × m matrix.
We introduce the matrix
B = A − α_1 β_1^T − ... − α_n β_n^T,
where α_1, ..., α_n and β_1, ..., β_n are arbitrary column vectors; that is, the matrix B is obtained from A by subtraction of singular (rank-one) matrices of the form α_i β_i^T (n < m).
Consider the n + 1 auxiliary linear systems
x_0 = Bx_0 + b,
x_1 = Bx_1 + α_1,
...
x_n = Bx_n + α_n.
Then
x = x_0 + Σ_{j=1}^{n} J_j x_j,
where J_1, ..., J_n are the components of the vector J which satisfies the equation J = TJ + t. Here T is the matrix with the elements T_{ij} = β_i^T x_j, and t is the vector with the components t_i = β_i^T x_0.
In the same way, for a system of integral equations with the kernel matrix
K(x, y) = M(x, y) + Σ_{i=1}^{n} α_i(x) β_i(y),
and assuming that the corresponding auxiliary equations are uniquely solvable, we derive, as previously, that
φ(x) = φ_0(x) + Σ_{i=1}^{n} J_i φ_i(x),
where the vector J satisfies J = TJ + t with T_{ij} = ∫ β_i(y) φ_j(y) dy and t_i = ∫ β_i(y) φ_0(y) dy.
Consider now the equation
Kφ = f,   (2.56)
which can be rewritten in the form
φ = (I − κK)φ + κf.
It is known that
φ_n = Σ_{i=0}^{n} (I − κK)^i κf → φ   (2.57)
as n → ∞ if the spectral radius of I − κK is less than one. We represent φ_n as a polynomial in K: since the iterations K^i f can be estimated on a Markov chain as
K^i f = E Q_i f(Y_i),  Q_0 = 1,
the random variable
ξ*(n; x) = κ Σ_{i=0}^{n} C_{n+1}^{i+1} (−κ)^i Q_i f(Y_i)   (2.58)
is an unbiased estimator of φ_n:
φ_n(x) = E ξ*(n; x).
Expanding f in the eigenfunctions f_1, f_2, ..., where λ_1 ≤ λ_2 ≤ ... are the characteristic numbers of K, one can write down φ_n and the convergence condition explicitly.
Suppose now that it is desired to calculate a linear functional I_h = (φ, h) of the solution φ. Then
(φ_n, h) = E ζ_n,
where ζ_n is the analogous estimator with the chain started from the density p_0 and the factor h(Y_0)/p_0(Y_0) included in the weight. The variance of this estimator is a double sum over i, j ≤ n + 1 of terms of the form C_{n+1}^{i+1} C_{n+1}^{j+1} (−κ)^{i+j} d_{ij}, where the quantities d_{ij} are expressed through second moments of the weighted chain functionals estimating (K^{i−1} f, h). From this we get
Dζ*(n; x) ≤ max_{i,j} d_{ij} (1 + κ)^{2n}.   (2.60)
Let us consider now a different approach, based on the iteration of the resolvent operator. We choose κ so that (I + κK)^{−1} exists. Then we can rewrite (2.56) in the form
φ = (I + κK)^{−1} φ + κ(I + κK)^{−1} f.
If κ > 0, then
φ_n = Σ_{i=1}^{n+1} [(I + κK)^{−1}]^i κ f → φ   (2.61)
as n → ∞.
The kernel r(x, y) of the operator R satisfies the equation (y is considered as a parameter)
r(x, y) = −κ K r(x, y) + k(x, y),  x ∈ Γ,
and is approximated by the truncated Neumann series
r_N(x, y) = Σ_{i=0}^{N} (−κ)^i K^i k(x, y).   (2.62)
Estimating the iterations of K on a Markov chain with weights Q'_k defined as in Section 2.1, one obtains an estimator of the form
ξ'(n; x) = nκ f(x) + κ Σ_{k=1}^{n} c_k (−κ)^k Q'_k f(Y_k),   (2.63)
with certain combinatorial coefficients c_k, for the partial sums in which the resolvent kernel is expanded up to N terms. We then substitute this polynomial into (2.61) and arrive at the estimator
ξ''(n; x) = (n + 1)κ f(x) + κ Σ_{k=1}^{N} (−κ)^k c'_k Q'_k f(Y_k).   (2.64)
χ_1 = −κK χ_1 + κf,
χ_i = χ_{i−1} − κK χ_i,  i = 2, 3, ..., n,
where χ = (χ_1, ..., χ_n)^T. We choose κ so that the spectral radius ρ(κK) is less than 1. Then the Neumann series for each of these equations is convergent, and we can use the approximation
φ_{n−1} = Σ_{i=1}^{n} χ_i.
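In the finite-dimensional case this recursion is easy to check directly. The sketch below uses an arbitrary symmetric positive definite matrix in place of K and verifies that the partial sums Σ_{i=1}^{n} χ_i approach the solution of Kφ = f; the matrix, the right-hand side and the value of κ are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

s = 5
M = rng.uniform(-1.0, 1.0, size=(s, s))
K = M @ M.T + 0.5 * np.eye(s)          # a symmetric positive definite stand-in for K
f = rng.uniform(-1.0, 1.0, size=s)
kappa = 0.1                            # kappa > 0, so (I + kappa K)^{-1} exists

I = np.eye(s)
phi_exact = np.linalg.solve(K, f)      # solution of K phi = f, equation (2.56)

chi = np.linalg.solve(I + kappa * K, kappa * f)    # chi_1 = -kappa K chi_1 + kappa f
phi_n = chi.copy()
for i in range(2, 400):
    chi = np.linalg.solve(I + kappa * K, chi)      # chi_i = chi_{i-1} - kappa K chi_i
    phi_n += chi                                   # partial sum of chi_1 + ... + chi_n
print(np.max(np.abs(phi_n - phi_exact)))           # small, and decreasing as n grows
```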
Each χ_i is then approximated by a finite segment of its Neumann series, and since n and N are finite (N ≤ n − 1), the resulting approximation to φ_{n−1} is an operator polynomial in K applied to f; its components can be estimated on a single Markov chain by the estimator ξ'' introduced above.
It is reasonable to coordinate the statistical error and the bias of ξ''; this ensures consistency when calculating φ(x) by averaging over independent samples of ξ''. Let q < 1 denote the rate of the geometric convergence of the corresponding Neumann series. Then n = ln ε / ln q + C_1, and the number of samples is chosen so that the statistical error is also of order ε. Assume that λ_m = O(m) (e.g., this is true in the case of the simple layer potential if Γ is convex). Then n = const/ε.
Chapter 3
Random Walk on Boundary algorithms for solving the Laplace equation
In this chapter we deal with the Laplace equation
Δu(x) = 0,  x ∈ G,
where the function u is defined in some region G of the Euclidean space R^m and has continuous derivatives of at least second order.
Let G be a bounded simply connected domain with a simply connected boundary Γ = ∂G. We denote G_1 = R^m \ Ḡ, Ḡ = G ∪ Γ. From here on, m is considered to be greater than or equal to 3, which makes it possible to write down the formulas in their general form; but it must be mentioned that the two-dimensional case can be treated in the same way.
We pass on now to the definition of the surface potentials. Suppose at first that Γ is a smooth Lyapunov surface (Gunter, 1957). This means that:
1) at every point y ∈ Γ there exists a well-defined normal vector n(y);
2) this vector, considered as a function of the point y on the surface Γ, is Hölder continuous; that is, if x, y ∈ Γ, then
|n(x) − n(y)| ≤ A|x − y|^α
for some constants A and α ∈ (0, 1];
3) there exists a constant d > 0 such that, if we consider the ball S(y, d) with the centre at some point y ∈ Γ and radius equal to d, then the straight lines parallel to n(y) intersect Γ ∩ S(y, d) only once.
Let σ be the standard surface measure on Γ and let μ(y), y ∈ Γ, be some continuous function. Then we can introduce the following functions (see, for example, (Vladimirov, 1981)): the single layer potential
V[μ](x) = ∫_Γ (2/((m − 2)σ_m)) |x − y|^{2−m} μ(y) dσ(y),
and the double layer potential
W[μ](x) = ∫_Γ (2/((m − 2)σ_m)) (∂/∂n(y)) |x − y|^{2−m} μ(y) dσ(y) = ∫_Γ (2/σ_m) (cos φ_yx / |x − y|^{m−1}) μ(y) dσ(y),   (3.2)
where φ_yx is the angle between the vector x − y and the interior normal n(y), and
σ_m = 2π^{m/2}/Γ(m/2)
is the surface area of the unit sphere in R^m.
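With this normalization, the double layer potential of the unit density over the unit sphere equals 2 at every interior point (the Gauss solid-angle integral). The short check below estimates W[1](x) for m = 3 by plain Monte Carlo quadrature over points sampled uniformly on the sphere; it only illustrates the definition (3.2) and is not an algorithm from the book.

```python
import numpy as np

rng = np.random.default_rng(6)

def W_unit_density(x, n_samples=200000):
    """Monte Carlo quadrature of W[mu](x) with mu = 1 over the unit sphere (m = 3).
    Kernel: (2 / sigma_3) cos(phi_yx) / |x - y|^2, sigma_3 = 4 pi, interior normal n(y) = -y."""
    y = rng.normal(size=(n_samples, 3))
    y /= np.linalg.norm(y, axis=1, keepdims=True)          # uniform points on the unit sphere
    r = x - y
    r2 = np.einsum('ij,ij->i', r, r)
    cos_phi = np.einsum('ij,ij->i', r, -y) / np.sqrt(r2)   # angle between x - y and n(y) = -y
    kernel = (2.0 / (4.0 * np.pi)) * cos_phi / r2
    return 4.0 * np.pi * kernel.mean()                     # surface area times the mean value

print(W_unit_density(np.array([0.3, -0.2, 0.1])))          # close to 2 for any interior point
```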
Theorem 3.1.
If Γ is a Lyapunov surface and μ is a continuous function, then:
1) V ∈ C(R^m) (and, consequently, V is continuous on Γ);
2) there exist regular normal derivatives (∂V/∂n(x))_+ and (∂V/∂n(x))_− on Γ, and
(∂V/∂n(x))_± = ±μ(x) + ∂V/∂n(x).
We recall that a function u(x) is said to have a regular normal derivative on Γ if, uniformly in x ∈ Γ, the limits of its normal derivative along the normal from both sides of Γ exist; here (·)_+ denotes the limit from the exterior and (·)_− denotes the limit from the interior.