Classification and Overview of Meshfree Methods

Institute of Scientific Computing
Technical University Braunschweig
July 2004 (revised)

Location:
Institut für Wissenschaftliches Rechnen
Technische Universität Braunschweig
Hans-Sommer-Strasse 65
D-38106 Braunschweig

Postal Address:
Institut für Wissenschaftliches Rechnen
Technische Universität Braunschweig
D-38092 Braunschweig
Germany

Contact:
Phone: +49-(0)531-391-3000
Fax: +49-(0)531-391-3003
EMail: [email protected]
www: http://www.tu-bs.de/institute/WiR

Copyright © by Institut für Wissenschaftliches Rechnen, Technische Universität Braunschweig

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted in connection with reviews or scholarly analysis. Permission for use must always be obtained from the copyright holder. All rights reserved, including those of partial reprinting, of partial or complete reproduction (photography, microscopy), of storage in data processing systems, and of translation.
Contents

1 Introduction
2 Preliminaries
   2.1 Nomenclature
   2.2 Abbreviations
   2.3 Weighted Residual Method
   2.4 Multi-index Notation
   2.5 Complete Basis
3 Classification
4 Construction of a Partition of Unity
   4.1 Mesh-based Construction
   4.2 Moving Least-Squares (MLS)
   4.3 Reproducing Kernel Particle Method (RKPM)
       4.3.1 Hermite RKPM
   4.4 Particle Placement
   4.5 Weighting Functions
5 Specific Meshfree Methods
   5.8 hp-clouds
   5.12 Others
6 Related Problems
   6.1 Essential Boundary Conditions
       6.1.1 Lagrangian Multipliers
       6.1.3 Penalty Approach
       6.1.4 Nitsche's Method
       6.1.6 Transformation Method
       6.1.8 PUM Ideas
       6.1.9 Boundary Collocation
   6.2 Integration
   6.4 Discontinuities
   6.5 h-Adaptivity
   6.6 Parallelization
7 Conclusion
References
1 Introduction
The number of integration points for a sufficiently accurate evaluation of the integrals of the weak form is considerably larger in MMs than in mesh-based methods. In collocation MMs no integration is required; however, this advantage is often offset by accuracy and stability problems.

Outline of the Paper  The references given in the following outline are restricted to a few important publications; later on, in the individual subsections, further references are mentioned. The paper is organized as follows: Section 2 aims to introduce abbreviations and some important mathematical terms which will then be used frequently in the rest of the paper. In section 3 we propose a classification of MMs. According to this classification the MMs fall clearly into certain categories, and their differences and relations can be seen. We do not want to overemphasize the meshfree aspect, although it is the main issue of this paper, because some methods can also employ mesh-based interpolations, e.g. from the FEM. We classify MMs according to

- the choice of the test function in the weighted residual procedure, which might lead to collocation procedures, Bubnov-Galerkin methods etc.,
- the choice of the approximation, which can use an intrinsic basis only or add an extrinsic basis.

Then, MMs based on the usage of an additional extrinsic basis are described. These methods make it possible to increase the order of consistency of an existing partition of unity.
2 Preliminaries

2.1 Nomenclature
Throughout this paper we use normal Latin or Greek letters for functions and
scalars. Bold small letters are in general used for vectors and bold capital letters
for matrices. The following table gives a list of all frequently used variables and
their meaning.
symbol   meaning
u        function
u^h      approximated function
x        space coordinate
x_i      position of a node (= particle, point)
Φ        shape (= trial, ansatz) functions
Ψ        test functions
N        FEM shape function (if the difference is important)
w        weighting (= window, kernel) function
p        intrinsic or extrinsic basis
a        vector of unknown coefficients
M        moment matrix
α        multi-index
α_i      vector in the multi-index set {α : |α| ≤ c}
d        dimension
n        order of consistency
k        size of a complete basis
N        total number of nodes (= particles, points)
ρ        dilatation parameter (= smoothing length)
2.2 Abbreviations

APET:
BC: Boundary Condition
BEM: Boundary Element Method
BIE: Boundary Integral Equation
CSPH: Corrected Smooth Particle Hydrodynamics
DEM: Diffuse Element Method
DOI: Domain Of Influence
EBC: Essential Boundary Condition
EFG: Element Free Galerkin
FDM: Finite Difference Method
FEM: Finite Element Method
FLS: Fixed Least-Squares
FPM: Finite Point Method
FVM: Finite Volume Method
GFEM: Generalized Finite Element Method
GMLS: Generalized Moving Least-Squares
LBIE: Local Boundary Integral Equation
LSQ: Standard Least-Squares
MFEM: Meshless Finite Element Method
MFLS:
MFS: Method of Finite Spheres
MLPG: Meshless Local Petrov-Galerkin
MLS: Moving Least-Squares
MLSPH: Moving Least-Squares Particle Hydrodynamics
MLSRK: Moving Least-Squares Reproducing Kernel
MM: Meshfree Method
NEM: Natural Element Method
PDE: Partial Differential Equation
PN: Partition of Nullity
PU: Partition of Unity
PUFEM: Partition of Unity Finite Element Method
PUM: Partition of Unity Method
RKEM: Reproducing Kernel Element Method
RKM: Reproducing Kernel Method
RKPM: Reproducing Kernel Particle Method
SPH: Smoothed Particle Hydrodynamics
2.3 Weighted Residual Method
The aim is to solve partial differential equations (PDEs) numerically, i.e. we are interested in finding the functions u that fulfill the PDE Lu = f, where L is any differential operator and f is the system's right-hand side.

One of the most general techniques for doing this is the weighted residual method. Conventional methods like the Finite Element Method (FEM) are the most popular mesh-based representatives of this method, and also the Finite Difference Method (FDM) and the Finite Volume Method (FVM) can be deduced with the weighted residual method as the starting point. All Meshfree Methods (MMs) can also be seen as certain realizations of the weighted residual idea.

In this method an approximation of the unknown field variables u is made in summation expressions of trial functions (also called shape or ansatz functions) Φ_i and unknown nodal parameters û_i, hence

    u ≈ u^h = Σ_i Φ_i û_i = Φ^T û.

Replacing u with u^h in the PDE gives Lu^h − f = ε. As it is in general not possible to fulfill the original PDE exactly with the approximation, a residual error ε is introduced. Test functions Ψ are chosen, and the system of equations is determined by setting the residual error orthogonal to this set of test functions,

    ∫_Ω Ψ ε dΩ = ∫_Ω Ψ (Lu^h − f) dΩ = 0.

The integral expressions of this weak form of the PDE have to be evaluated with respect to Φ and Ψ, and the given boundary conditions have to be considered. The resulting system of equations A û = b is to be solved for determining the unknowns û. Throughout this paper we often write u instead of û.

It should be mentioned that one often makes use of the divergence theorem during this procedure to modify the integral expressions in order to shift derivatives between the trial and test functions.
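As an illustration of the weighted residual procedure, the following minimal sketch (not taken from the paper) uses a Bubnov-Galerkin choice Ψ_i = Φ_i with hat functions for −u″ = f on (0, 1) with homogeneous Dirichlet boundary conditions. The mesh size, load lumping, and variable names are assumptions of this example only.

```python
import numpy as np

# Bubnov-Galerkin sketch: solve -u'' = f on (0,1), u(0) = u(1) = 0,
# with hat-function trial AND test functions on a uniform mesh.
# Exact solution u(x) = sin(pi x) for f(x) = pi^2 sin(pi x).

n = 19                          # number of interior nodes (illustrative)
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)    # interior node positions

# Stiffness matrix A_ij = integral of Phi_i' Phi_j' for hat functions
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h

f = np.pi**2 * np.sin(np.pi * x)
b = h * f                       # lumped load vector: integral of f Phi_i ~ h f(x_i)

u = np.linalg.solve(A, b)       # nodal parameters u_i of the weak-form system A u = b
err = np.max(np.abs(u - np.sin(np.pi * x)))
```

The residual of the strong form is not zero pointwise; only its weighted integrals against the test functions vanish, which is exactly the orthogonality condition described above.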
In other contexts the test functions Ψ are sometimes also termed weighting functions. But in the context of MMs one should strictly distinguish between test and weighting functions, because the term weighting function is already used in a different context of MMs.

2.4 Multi-index Notation

A multi-index α = (α_1, …, α_d) is a vector of non-negative integers with length |α| = Σ_{i=1}^d α_i. Derivatives are written as

    D^α u(x) = ∂^{|α|} u(x) / (∂x_1^{α_1} ∂x_2^{α_2} ⋯ ∂x_d^{α_d}).

With this notation we can easily define a polynomial basis of order n as

    p(x) = { x^α : |α| ≤ n }.   (2.1)

It can be seen that although α is a vector, the set of all vectors with |α| ≤ n, hence {α : |α| ≤ n}, can be considered a matrix. In the following, α_i refers to the α-vector in the i-th line of the set {α : |α| ≤ n}, whereas α_j stands for a specific component of a certain vector α_i.

Some examples of complete bases in one and two dimensions (d = 1, 2) for first and second order consistency (n = 1, 2) are

    d = 1, n = 1:  {α : |α| ≤ 1} = ( (0), (1) ),                            p = ( x^(0), x^(1) )^T = ( 1, x )^T,
    d = 1, n = 2:  {α : |α| ≤ 2} = ( (0), (1), (2) ),                       p = ( 1, x, x^2 )^T,
    d = 2, n = 1:  {α : |α| ≤ 1} = ( (0,0), (1,0), (0,1) ),                 p = ( 1, x, y )^T,
    d = 2, n = 2:  {α : |α| ≤ 2} = ( (0,0), (1,0), (0,1), (2,0), (1,1), (0,2) ),  p = ( 1, x, y, x^2, xy, y^2 )^T.

2.5 Complete Basis

The relationship between the dimension d and the order of consistency n on the one hand, and the number of components k in the basis vector on the other hand, is

    k = (1/d!) ∏_{i=1}^{d} (n + i).
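The multi-index set and the count k can be checked with a short sketch; the function names are illustrative only.

```python
from itertools import product
from math import factorial, prod

# Enumerate the multi-index set {alpha : |alpha| <= n} in d dimensions and
# check the count against k = (1/d!) * prod_{i=1}^{d} (n + i).

def multi_indices(d, n):
    return [a for a in product(range(n + 1), repeat=d) if sum(a) <= n]

def k_formula(d, n):
    return prod(n + i for i in range(1, d + 1)) // factorial(d)

# d = 2, n = 2 reproduces the basis (1, x, y, x^2, xy, y^2): six terms
assert len(multi_indices(2, 2)) == k_formula(2, 2) == 6
```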
Functions Φ_i(x) of n-th order consistency satisfy the reproducing conditions

    Σ_i Φ_i(x) p(x_i) = p(x),

where p is the complete basis of Eq. 2.1. The derivative reproducing conditions follow immediately as

    Σ_i D^α Φ_i(x) p(x_i) = D^α p(x).   (2.3)

Techniques for the construction of a PU of n-th order with the concept of the complete (intrinsic) basis p(x) will be worked out in section 4.

Another way to show that the functions Φ_i are n-th order consistent is to insert the terms of the approximation into the Taylor series and to identify the resulting error term, i.e. the term in the series which cannot be captured with the approximation. This will be worked out later. In multi-index notation the Taylor series may be written as

    u(x_i) = Σ_{|α|=0}^∞ ( (x_i − x)^α / α! ) D^α u(x).
3 Classification

We present the classification of MMs already at this point of the paper rather than at the end because it shall serve the reader as a guideline throughout this paper; the aim is not to get lost in the large number of methods and aspects in the meshfree numerical world. Therefore, we hope this to be advantageous for newcomers to this area as well as for those with a certain prior knowledge.
In this paper we do not restrict ourselves to the meshfree aspect only, although it is the major concern, as some methods can use either mesh-based or meshfree PUs or even a combination of both via coupling. Therefore we focus the overview in Fig. 1 on the PU of n-th order. The PU can be constructed either with a mesh-based FEM procedure or with the meshfree MLS or RKPM principle; other possibilities are also mentioned in the figure and are discussed later.
For a specific MM, these three properties are in general well defined. A few methods may occur in more than one case; e.g. the PUM can be used in a collocation or a Galerkin scheme. All the specific MMs resulting as certain realizations of the three classifying aspects will be discussed in detail in section 5.
The grey part in Fig. 1 refers to alternative non-standard approaches to construct PUs. Here, Sibson and non-Sibsonian interpolations, being the starting point for the NEM (subsection 5.9), are mentioned for example. Also construction ideas which are a combination of meshfree and mesh-based approaches, such as those resulting from coupling methods (subsection 6.3), MFEM functions (subsection 5.10), and RKEM functions (subsection 5.11), belong to these alternative approaches. Calling these alternative approaches meshfree is not without a certain conflict; in fact, they are strongly influenced by both meshfree and mesh-based methods and try to combine the advantages of both methodologies.
4 Construction of a Partition of Unity

In this section several ways are shown for the construction of a PU of order n. For completeness and comparison purposes we start by reviewing the mesh-based construction of the FEM.
4.1
LBIE=MLPG 4
BEM
i = fund. sol.
MLPG 1
LSMM, MLPG 3
least squares
FEM
i = ...
Mesh-based Construction
i = w i
i = i
i =
~
MLPG 5
point collocation
(FDM)
subdomain coll.
(FVM)
Bubnov
Galerkin FEM
i = (x x i )
meshfree
meshbased
test function
u = i u i
intrinsic basis only
FEM
There are a few MMs that use different ways for the construction of a PU, for example the RKEM and the NEM. Also coupling methods, which combine meshfree and mesh-based ideas to construct a coupled PU, may be considered here. These ideas are discussed later on, either in section 5 or in subsection 6.3. In this section the focus is on the MLS and RKPM procedures, which are the central aspect of most MMs.
4.1 Mesh-based Construction
In the FEM, shape functions Φ_i(x) = Σ_k a_ik p_k(x) are constructed from the complete basis by requiring the Kronecker delta property at the n_e nodes of an element,

    Σ_k p_k(x_j) a_ik = δ_ij,

or in matrix form

    [ p^T(x_1) ; … ; p^T(x_{n_e}) ] A = I,

hence A = [P(x_j)]^{-1}, and the shape functions follow as Φ^T(x) = p^T(x) A.
The conditions for a PU do not have to be imposed directly but are satisfied automatically for all functions Φ_i. All shape functions of the FEM build PUs of a certain order (except in some special cases, e.g. p-enriched PUs with bubble functions).
4.2 Moving Least-Squares (MLS)
4.2.1

The MLS was introduced by Lancaster and Salkauskas in [82] for smoothing and interpolating data. If a function u(x) defined on a domain Ω ⊂ R^d is sufficiently smooth, one can define a local approximation around a fixed point x̄:

    u(x) ≈ L_x̄ u(x) = p^T(x) a(x̄).

x_i refers to the position of the N nodes within the domain, which is discussed separately in subsection 4.4. The weighting function w(x − x_i) plays an important role in the context of MMs, which is worked out in subsection 4.5. It is defined on small supports Ω_i around each node, thereby ensuring the locality of the approximation; the overlapping situation of the supports Ω_i within the domain is called a cover. The weighting function may also be chosen individually for each node; then we write w_i(x − x_i).

The unknown coefficients a(x̄) follow from minimizing the weighted discrete L2 error norm

    J_x̄ = Σ_{i=1}^N w(x̄ − x_i) [ L_x̄ u(x_i) − u(x_i) ]^2 = Σ_{i=1}^N w(x̄ − x_i) [ p^T(x_i) a(x̄) − u_i ]^2.

Setting the derivatives of J_x̄ with respect to the coefficients a_1, …, a_k to zero, the following system of equations results:

    ∂J_x̄/∂a_1 = 0:  Σ_{i=1}^N w(x̄ − x_i) 2 p_1(x_i) [ p^T(x_i) a(x̄) − u_i ] = 0
    ∂J_x̄/∂a_2 = 0:  Σ_{i=1}^N w(x̄ − x_i) 2 p_2(x_i) [ p^T(x_i) a(x̄) − u_i ] = 0
    ⋮
    ∂J_x̄/∂a_k = 0:  Σ_{i=1}^N w(x̄ − x_i) 2 p_k(x_i) [ p^T(x_i) a(x̄) − u_i ] = 0.

Eliminating the constant factor 2 and separating the right-hand side gives

    Σ_{i=1}^N w(x̄ − x_i) p(x_i) p^T(x_i) a(x̄) = Σ_{i=1}^N w(x̄ − x_i) p(x_i) u_i.
Solving this for a(x̄) and then replacing a(x̄) in the local approximation leads to

    L_x̄ u(x) = p^T(x) a(x̄) = p^T(x) [ Σ_{i=1}^N w(x̄ − x_i) p(x_i) p^T(x_i) ]^{-1} Σ_{i=1}^N w(x̄ − x_i) p(x_i) u_i.

In order to extend this local approximation to the whole domain, the so-called moving procedure is introduced to achieve a global approximation. Since the point x̄ can be chosen arbitrarily, one can let it move over the whole domain, x̄ → x, which leads to the global approximation of u(x) [99]. Mathematically, a global approximation operator G is introduced with

    u(x) ≈ G u(x) = u^h(x),

where the operator G is another mapping, defined as G u(x) = lim_{x̄→x} L_x̄ u(x), and can be interpreted as the globalization of the local approximation operator L_x̄ through the moving process [99]. Finally we obtain

    u^h(x) = p^T(x) [ Σ_{i=1}^N w(x − x_i) p(x_i) p^T(x_i) ]^{-1} Σ_{i=1}^N w(x − x_i) p(x_i) u_i.   (4.1)
This may be written compactly as

    u^h(x) = G u(x) = p^T(x) [M(x)]^{-1} B(x) u,

with the moment matrix M(x) of size k × k,

    M(x) = Σ_{i=1}^N w(x − x_i) p(x_i) p^T(x_i),

the k × N matrix

    B(x) = [ w(x − x_1) p(x_1), …, w(x − x_N) p(x_N) ],

and the N × 1 vector u of nodal parameters. Introducing shape functions Φ_i, this is

    u^h(x) = Σ_{i=1}^N Φ_i(x) u_i = Φ(x) u.

With this, one can evaluate the shape functions at arbitrarily many points, but without knowing the shape functions explicitly. In the literature this is sometimes called evaluating a function "digitally", as we do not know it in an explicit continuous ("analogous") form.
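A minimal sketch of Eq. 4.1 in these terms, with an illustrative node set, linear basis, and a Wendland-type weight (all choices are assumptions of this example, not prescriptions of the paper):

```python
import numpy as np

# MLS sketch (Eq. 4.1): Phi(x) = p^T(x) M(x)^{-1} B(x) on a 1D node set.

nodes = np.linspace(0.0, 1.0, 11)
rho = 0.3                                       # dilatation parameter (illustrative)

def w(r):                                       # compactly supported weight (Wendland C2)
    s = np.abs(r) / rho
    return np.where(s < 1.0, (1.0 - s) ** 4 * (4.0 * s + 1.0), 0.0)

def mls_shape(x):
    p = lambda y: np.array([1.0, y])            # basis (1, x), k = 2
    W = w(x - nodes)                            # weights w(x - x_i)
    Pm = np.array([p(xi) for xi in nodes])      # N x k matrix of p^T(x_i)
    M = Pm.T @ (W[:, None] * Pm)                # moment matrix, k x k
    B = Pm.T * W                                # k x N
    return p(x) @ np.linalg.solve(M, B)         # Phi_i(x), i = 1..N

phi = mls_shape(0.37)
assert abs(phi.sum() - 1.0) < 1e-10             # partition of unity
assert abs(phi @ nodes - 0.37) < 1e-10          # linear consistency
```

The two asserts check exactly the reproducing conditions of subsection 2.5 for n = 1: constants and linear fields are reproduced, although the individual Φ_i are rational, non-interpolating functions.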
The derivatives of the MLS shape functions follow with the product rule. The first derivatives are

    Φ^T_{,k} = p^T_{,k} M^{-1} B + p^T (M^{-1})_{,k} B + p^T M^{-1} B_{,k},   (4.2)

with (M^{-1})_{,k} = −M^{-1} M_{,k} M^{-1}. The second derivatives are

    Φ^T_{,kl} = p^T_{,kl} M^{-1} B + p^T_{,k} (M^{-1})_{,l} B + p^T_{,k} M^{-1} B_{,l}
              + p^T_{,l} (M^{-1})_{,k} B + p^T (M^{-1})_{,kl} B + p^T (M^{-1})_{,k} B_{,l}
              + p^T_{,l} M^{-1} B_{,k} + p^T (M^{-1})_{,l} B_{,k} + p^T M^{-1} B_{,kl},   (4.3)

with

    (M^{-1})_{,kl} = M^{-1} M_{,l} M^{-1} M_{,k} M^{-1} − M^{-1} M_{,kl} M^{-1} + M^{-1} M_{,k} M^{-1} M_{,l} M^{-1}.
In [16] Belytschko et al. propose an efficient way to compute the derivatives of the MLS shape functions by means of an LU decomposition of the k × k system of equations.
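The first-derivative formula Eq. 4.2 can be checked numerically against central differences; the weight function, node layout, and tolerances below are illustrative assumptions.

```python
import numpy as np

# Check of Eq. 4.2 in 1D with a linear basis:
#   Phi'^T = p'^T M^{-1} B + p^T (M^{-1})' B + p^T M^{-1} B',
# with (M^{-1})' = -M^{-1} M' M^{-1}, compared against central differences.

nodes = np.linspace(0.0, 1.0, 11)
rho = 0.3

def w(r):
    s = np.abs(r) / rho
    return np.where(s < 1.0, (1 - s) ** 4 * (4 * s + 1), 0.0)

def dw(r):                                   # derivative of w with respect to r
    s = np.abs(r) / rho
    return np.where(s < 1.0, -20.0 * s * (1 - s) ** 3 * np.sign(r) / rho, 0.0)

p  = lambda x: np.array([1.0, x])
dp = lambda x: np.array([0.0, 1.0])
Pm = np.array([[1.0, xi] for xi in nodes])

def phi_and_dphi(x):
    W, dW = w(x - nodes), dw(x - nodes)
    M  = Pm.T @ (W[:, None] * Pm);  B  = Pm.T * W
    dM = Pm.T @ (dW[:, None] * Pm); dB = Pm.T * dW
    Mi = np.linalg.inv(M)
    phi  = p(x) @ Mi @ B
    dphi = dp(x) @ Mi @ B - p(x) @ Mi @ dM @ Mi @ B + p(x) @ Mi @ dB
    return phi, dphi

x, e = 0.37, 1e-6
_, dphi = phi_and_dphi(x)
fd = (phi_and_dphi(x + e)[0] - phi_and_dphi(x - e)[0]) / (2 * e)
assert np.max(np.abs(dphi - fd)) < 1e-5
```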
As an example, Fig. 2 shows shape functions and their derivatives in a one-dimensional domain Ω = [0, 1] with 11 equally distributed nodes. The weighting functions, discussed in detail in subsection 4.5, have a dilatation parameter of ρ = 3Δx = 0.3. The following important properties can be seen:
The dashed line in the upper picture shows that the sum of the shape functions Σ_i Φ_i(x) equals 1 in the whole domain, thus {Φ_i} builds a PU. The derivatives of the MLS-PU build Partitions of Nullity (PNs), i.e.

    Σ_i ∂Φ_i(x)/∂x = Σ_i ∂²Φ_i(x)/∂x² = 0.
The rational shape functions themselves are smooth and can still be regarded as rather polynomial-like, but the derivatives tend to have a more and more non-polynomial character. This causes problems in integrating the integral expressions of the weak form, see subsection 6.2. Furthermore, the effort to evaluate the MLS shape functions at each integration point is not small, as a matrix inversion is involved.
The shape functions are not interpolating, i.e. they do not possess the Kronecker delta property. That means that at every node there is more than one shape function ≠ 0. Thus the computed values of a meshfree approximation are not nodal values. Due to this fact, they are sometimes called "fictitious" values. To obtain the real value of the sought function at a point, all influences of shape functions which are non-zero there have to be added up. The non-interpolating character makes the imposition of essential boundary conditions difficult (see subsection 6.1). The lack of the Kronecker delta property is also a source of difficulties in the error analysis of MMs for solving Dirichlet boundary value problems [59].
Figure 2: Partition of Unity functions and derivatives constructed with the MLS technique.

4.2.2

We can use a different starting point for the deduction of the MLS functions. A Taylor series expansion is the standard way to prove consistency of a certain order, and we can, the other way around, use it to construct a consistent approximation. If we want to approximate a function u(x) in the form

    u^h(x) = Σ_{i=1}^N Φ_i(x) u_i,

Φ_i(x) is chosen to be Φ_i(x) = p^T(x_i − x) a(x) w(x − x_i), which can be interpreted as a localized polynomial approximation. For computational reasons we write (x_i − x) as the argument of p instead of (x). As p only builds the basis of our approximation, there is no loss in generality due to this shifting. Inserting the Taylor series

    u(x_i) = Σ_{|α|=0}^∞ ( (x_i − x)^α / α! ) D^α u(x)

for the nodal values u_i gives
    u^h(x) = Σ_{i=1}^N Φ_i(x) u_i = Σ_{i=1}^N p^T(x_i − x) a(x) w(x − x_i) Σ_{|α|=0}^∞ ( (x_i − x)^α / α! ) D^α u(x)
           = Σ_{i=1}^N [ (x_i − x)^{α_1} a_1(x) w(x − x_i) + (x_i − x)^{α_2} a_2(x) w(x − x_i) + … + (x_i − x)^{α_k} a_k(x) w(x − x_i) ]
             · [ u(x) + ( (x_i − x)^{α_2} / α_2! ) D^{α_2} u(x) + … + ( (x_i − x)^{α_k} / α_k! ) D^{α_k} u(x) + … ].

Comparing the coefficients on the left and on the right-hand side shows that all terms on the right-hand side referring to the derivatives of u(x) must cancel out. If this could be fulfilled, the exact solution would be reached; however, in general an error term will remain. Our vector of unknowns consists of k components (k depends on the dimension and consistency), and so k equations can be derived from the above expression. Note that α_1 = 0, thus α_1 = (0, …, 0) and thus D^{α_1} u(x) = u(x). Requiring

    u^h(x) = 1 · u(x) + 0 · D^{α_2} u(x) + … + 0 · D^{α_k} u(x) + error,

i.e. that u(x) is reproduced with coefficient 1 while the coefficients of all derivative terms vanish, leads to the k equations

    Σ_{i=1}^N [ (x_i − x)^{α_1} a_1 w(x − x_i) + … + (x_i − x)^{α_k} a_k w(x − x_i) ] ( (x_i − x)^{α_1} / α_1! ) = 1
    Σ_{i=1}^N [ (x_i − x)^{α_1} a_1 w(x − x_i) + … + (x_i − x)^{α_k} a_k w(x − x_i) ] ( (x_i − x)^{α_2} / α_2! ) = 0
    ⋮
    Σ_{i=1}^N [ (x_i − x)^{α_1} a_1 w(x − x_i) + … + (x_i − x)^{α_k} a_k w(x − x_i) ] ( (x_i − x)^{α_k} / α_k! ) = 0.

Writing α_1! = 0! = 1, neglecting all other α_k! terms as constants in homogeneous equations, and rearranging so that the vector of unknowns is extracted, gives in matrix-vector notation

    Σ_{i=1}^N w(x − x_i) p(x_i − x) p^T(x_i − x) a(x) = ( 1, 0, …, 0 )^T = p(0).

Solving this for a(x) and inserting the result into the approximation finally gives

    u^h(x) = Σ_{i=1}^N Φ_i(x) u_i
           = Σ_{i=1}^N p^T(x_i − x) a(x) w(x − x_i) u_i
           = p^T(0) [ Σ_{i=1}^N w(x − x_i) p(x_i − x) p^T(x_i − x) ]^{-1} Σ_{i=1}^N p(x_i − x) w(x − x_i) u_i.
We can now shift the basis another time by adding +x to all arguments, which leads to

    u^h(x) = p^T(x) [ Σ_{i=1}^N w(x − x_i) p(x_i) p^T(x_i) ]^{-1} Σ_{i=1}^N w(x − x_i) p(x_i) u_i.

This is exactly the expression obtained via the MLS procedure of the previous subsection.
4.2.3
It is shown in subsection 2.5 that functions of n-th order consistency satisfy the following equations:

    Σ_i Φ_i(x) p(x_i) = p(x),   (4.4)

which is a system of k equations. In this subsection the functions Φ_i are determined by directly fulfilling these k equations. Φ_i has the form Φ_i(x) = p̄^T(x_i) a(x) w(x − x_i), where for computational reasons we write instead of p(x_i) the shifted basis

    p̄(x_i) = ( p_1(x_i), p_2(x_i) − p_2(x), p_3(x_i) − p_3(x), …, p_k(x_i) − p_k(x) )^T.

The first line in our system of k equations is Σ_i Φ_i(x) p_1(x_i) = p_1(x), and due to p_1(x) = 1 (see subsection 2.4), and thus p_1(x_i) = 1 as well, we can write Σ_i Φ_i(x) · 1 = 1. Multiplying this with p(x) on both sides gives

    Σ_i Φ_i(x) p(x) = p(x).   (4.5)

Lines 2, 3, …, k of Eq. 4.5 are subtracted from the corresponding lines in Eq. 4.4, which results in

    Σ_i Φ_i(x) ( p_1(x_i), p_2(x_i) − p_2(x), …, p_k(x_i) − p_k(x) )^T = ( p_1(x), 0, …, 0 )^T,

i.e. Σ_i Φ_i(x) p̄(x_i) = p̄(x) with p̄(x) = (1, 0, …, 0)^T. Inserting the ansatz for Φ_i and solving for a(x) gives

    a(x) = [ Σ_i w(x − x_i) p̄(x_i) p̄^T(x_i) ]^{-1} p̄(x).

The approximation is then

    u^h(x) = Σ_{i=1}^N Φ_i(x) u_i = Σ_{i=1}^N p̄^T(x_i) a(x) w(x − x_i) u_i
           = Σ_{i=1}^N p̄^T(x_i) [ Σ_j w(x − x_j) p̄(x_j) p̄^T(x_j) ]^{-1} p̄(x) w(x − x_i) u_i.

One may shift the basis with (0, +p_2(x), …, +p_k(x)), which gives p̄(x_i) → p(x_i) and p̄(x) → p(x), and thus one obtains after some rearranging

    u^h(x) = p^T(x) [ Σ_{i=1}^N w(x − x_i) p(x_i) p^T(x_i) ]^{-1} Σ_{i=1}^N w(x − x_i) p(x_i) u_i.

This is exactly the same approximation as found with the MLS approach shown in subsection 4.2.1.
4.2.4 Generalized Moving Least-Squares (GMLS)

With the GMLS it is possible to treat the derivatives of a function as independent functions. This can for example be important for the solution of 4th order boundary value problems (e.g. the analysis of thin beams), where displacement and slope BCs might be imposed at the same point (which is not possible in 2nd order problems) [2]. In this case, not only the values u_i of a function are unknowns but also their derivatives up to a certain degree. The local approximation is then carried out using the following weighted discrete H^l error norm instead of the above used L2 error norm:

    J_x^{(l)}(a) = Σ_{j=1}^q Σ_{i=1}^N w^{(α_j)}(x − x_i) [ D^{α_j} p^T(x_i) a(x) − D^{α_j} u(x_i) ]^2.

The unknown vector a(x) is again obtained by minimizing this norm as in the standard MLS by setting ∂J_x(a)/∂a = 0, that is

    Σ_{j=1}^q Σ_{i=1}^N w^{(α_j)}(x − x_i) D^{α_j} p(x_i) [ D^{α_j} p^T(x_i) a(x) − D^{α_j} u_i ] = 0.

The MLS system of equations is still of the same order k × k, and the extra effort lies in building q times the sum over all points, q being the number of derivatives which shall be included in the approximation as unknowns [2]. Without repeating the details of the moving procedure, the approximation will, after solving the system for the unknown a(x), be of the form

    u^h(x) = p^T(x) [ Σ_{j=1}^q Σ_{i=1}^N w^{(α_j)}(x − x_i) D^{α_j} p(x_i) D^{α_j} p^T(x_i) ]^{-1} Σ_{j=1}^q Σ_{i=1}^N w^{(α_j)}(x − x_i) D^{α_j} p(x_i) D^{α_j} u_i.

4.2.5 Shepard Functions

Shepard functions build a PU of 0-th order consistency, constructed as

    Φ_i(x) = w(x − x_i) / Σ_{i=1}^N w(x − x_i).

It can also be shown that the MLS with the polynomial basis of 0-th order consistency, hence p(x) = [1], leads to the same result:

    u^h(x) = p^T(x) [ Σ_{i=1}^N w(x − x_i) p(x_i) p^T(x_i) ]^{-1} Σ_{i=1}^N w(x − x_i) p(x_i) u_i
           = 1 · [ Σ_{i=1}^N w(x − x_i) · 1 · 1 ]^{-1} Σ_{i=1}^N w(x − x_i) · 1 · u_i
           = Σ_{i=1}^N ( w(x − x_i) / Σ_{j=1}^N w(x − x_j) ) u_i.
Thus, Shepard's method is clearly a subcase of the MLS procedure with consistency of 0-th order. Using the Shepard method to construct a meshfree PU has the important advantage of low computational cost and simplicity. The problem is clearly the low order of consistency, which makes the Shepard PU fail for the solution of even second order boundary value problems. But it shall be mentioned that ways have been shown to construct a linear-precision (= first order consistent) PU based on Shepard's method with only small computational extra effort [79]. Furthermore, it is mentioned in [17] that in fact Shepard functions have been used for the simulation of second-order PDEs, showing that consistency is sufficient (stability provided) but may not be necessary for convergence.
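A sketch of Shepard functions with an illustrative node set and weight; it checks the PU property and shows that linear fields are generally not reproduced, which is the 0-th order consistency limitation discussed above.

```python
import numpy as np

# Shepard functions: Phi_i(x) = w(x - x_i) / sum_j w(x - x_j).

nodes = np.linspace(0.0, 1.0, 11)
rho = 0.3

def w(r):
    s = np.abs(r) / rho
    return np.where(s < 1.0, (1 - s) ** 2, 0.0)

def shepard(x):
    W = w(x - nodes)
    return W / W.sum()

phi = shepard(0.42)
assert abs(phi.sum() - 1.0) < 1e-12        # partition of unity
assert abs(phi @ np.ones(11) - 1.0) < 1e-12  # constants reproduced exactly
err = phi @ nodes - 0.42                   # linear field: a nonzero error remains
```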
Another approach is to introduce unknowns for the derivatives and then use "star nodes" to determine the derivative data. The closest nodes are chosen as star nodes, and there must be at least two star nodes in order to be able to construct a basis with linear precision. In this case, the problem is that there are more unknowns (having different physical meanings), with the undesirable effect that this may well lead to numerical difficulties due to the conditioning of the global matrices [79].

There is also a way to compute a first order consistent PU based on Shepard's method using only one type of nodal parameter: only the values of the sought function at the nodes occur as degrees of freedom.
Oñate et al. pointed out in [115] that any least-squares scheme can be used for an approximation, hence for obtaining certain shape functions. The basic idea is always to minimize the sum of the square distances of the error at any point, weighted with a certain function w, hence to minimize

    J = Σ_{i=1}^N w [ u^h(x_i) − u(x_i) ]^2 = Σ_{i=1}^N w [ p^T(x_i) a − u_i ]^2.

All least-squares schemes can be motivated from this starting point [115], as can be seen in the following, see also Fig. 3.

w = 1: The Standard Least-Squares method (LSQ) results, where the functional that has to be minimized becomes J = Σ_{i=1}^N [ p^T(x_i) a − u_i ]^2. The main drawback of the LSQ approach is that the approximation rapidly deteriorates if the number of points N used largely exceeds that of the k polynomial terms in the basis p. From the minimization, the system of equations for a becomes

    Σ_{i=1}^N p(x_i) p^T(x_i) a = Σ_{i=1}^N p(x_i) u_i.

The unknowns a take one certain value which can be inserted in the approximation.

w = w_j(x_i): Choosing w like this leads to the Fixed Least-Squares method (FLS). For the approximation of u at a certain point x, a fixed weighting function w_j is chosen due to some criterion. The system of equations becomes

    Σ_{i=1}^N w_j(x_i) p(x_i) p^T(x_i) a = Σ_{i=1}^N w_j(x_i) p(x_i) u_i.

w = w(x − x_i): This choice leads to the Moving Least-Squares method (MLS) discussed above, with the system

    Σ_{i=1}^N w(x − x_i) p(x_i) p^T(x_i) a(x) = Σ_{i=1}^N w(x − x_i) p(x_i) u_i.

If the weighting function is chosen individually for each node, one writes w_i(x) and obtains

    Σ_{i=1}^N w_i(x) p(x_i) p^T(x_i) a(x) = Σ_{i=1}^N w_i(x) p(x_i) u_i.
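The w = 1 case can be sketched directly from the normal equations above; the data and basis below are illustrative.

```python
import numpy as np

# LSQ sketch: solve sum_i p(x_i) p^T(x_i) a = sum_i p(x_i) u_i for one global
# coefficient vector a; here the data lie exactly in the span of the basis.

xi = np.linspace(0.0, 1.0, 20)
ui = 2.0 + 3.0 * xi                         # data from a linear field

P = np.vstack([np.ones_like(xi), xi]).T     # rows p^T(x_i), basis (1, x)
a = np.linalg.solve(P.T @ P, P.T @ ui)      # normal equations of the LSQ functional

assert np.allclose(a, [2.0, 3.0])           # linear data is recovered exactly
```

Unlike the MLS, a is one fixed vector for the whole domain; the approximation p^T(x) a is a single global polynomial, which is exactly why it deteriorates for N much larger than k.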
4.3 Reproducing Kernel Particle Method (RKPM)
The RKPM is motivated by the theory of wavelets, where functions are represented by a combination of the dilatation and translation of a single wavelet. Reproducing kernel methods are in general a class of operators that reproduce the function itself through integration over the domain [97]. Here, we are interested in an integral transform of the type

    u^h(x) = ∫_Ω K(x, y) u(y) dy.

[Figure 3: Choices of the weighting function: w(x) = 1 (LSQ), a fixed weighting function w_j evaluated at the nodes x_{i−1}, x_i, x_{i+1} (FLS), and moving weighting functions w_{i−1}(x), w_i(x), w_{i+1}(x) centered at the nodes (MLS).]

Clearly, if the kernel K(x, y) equals the Dirac function δ(x − y), the function u(x) will be reproduced exactly. It is important to note that the reproducing kernel method (RKM) is a continuous form of an approximation. However, for the evaluation of such an integral in practice, the RKM has to be discretized, hence

    u^h(x) = Σ_{i=1}^N K(x_i − x, x) u_i ΔV_i.

This discrete version is then called the reproducing kernel particle method (RKPM).
The kernel is written as

    u^h(x) = ∫_Ω C(x, y) w(x − y) u(y) dy,

where w(x − y) is a weighting function (in the context of the RKPM also called window function). If we had K(x, y) = w(x − y), the approximation would not be able to fulfill the required consistency conditions (which is the drawback of the wavelet and SPH methods [94], see subsection 5.1). Therefore the kernel is modified with a correction function C(x, y) so that it reproduces polynomials exactly, leading to K(x, y) = C(x, y) w(x − y). To define the modified kernel, the correction function has to be determined such that the approximation is n-th order consistent. Several approaches are shown in the following.
1.) This approach has been proposed in [98]. Here, we want to represent a function u(x) with the basis p(x), hence u^h(x) = p^T(x) a (remark: Li et al. write a, although in the result it becomes clear that it is not constant for changing x). In order to determine the unknown coefficients a, both sides are multiplied by p(x), and an integral window transform is performed with respect to a window function w(x − y) to obtain

    u(x) = p^T(x) a
    p(x) u(x) = p(x) p^T(x) a
    ∫_y p(y) w(x − y) u(y) dy = ∫_y p(y) p^T(y) w(x − y) dy · a.

This is a system of equations for determining a. Solving for a and inserting this into u^h(x) = p^T(x) a finally gives

    u^h(x) = p^T(x) [ ∫_y w(x − y) p(y) p^T(y) dy ]^{-1} ∫_y w(x − y) p(y) u(y) dy.
2.) This approach uses the moving least-squares idea in a continuous way and was proposed in [33, 99]. We start with a local approximation u(x) ≈ L_x̄ u(x) = p^T(x) a(x̄) (note, in the original papers this is chosen as u^h(x) = p^T(x̄ − x) a(x̄) for computational reasons, see subsection 4.6). A continuous counterpart of the discrete MLS error norm is minimized. Solving for a(x̄), inserting this into L_x̄ u(x) = p^T(x) a(x̄), and performing the moving procedure x̄ → x gives

    u^h(x) = p^T(x) [ ∫_y w(x − y) p(y) p^T(y) dy ]^{-1} ∫_y w(x − y) p(y) u(y) dy.
3.) This approach works with the help of a Taylor series expansion, as done in [25]. It starts with

    u^h(x) = ∫_y C(x, y) w(x − y) u(y) dy.
The correction function is chosen in the form C(x, y) = p^T(y − x) a(x), and the Taylor series

    u(y) = Σ_{|α|=0}^∞ ( (y − x)^α / α! ) D^α u(x)

is inserted for u(y), giving

    u^h(x) = ∫_y p^T(y − x) a(x) w(x − y) Σ_{|α|=0}^∞ ( (y − x)^α / α! ) D^α u(x) dy.

The following steps are identical to those shown for the Taylor series expansion in subsection 4.2.2: one multiplies the expressions out and compares the coefficients of the terms D^α u(x). This leads to the following system of equations:

    ∫_y [ (y − x)^{α_1} a_1 w(x − y) + … + (y − x)^{α_k} a_k w(x − y) ] ( (y − x)^{α_1} / α_1! ) dy = 1
    ∫_y [ (y − x)^{α_1} a_1 w(x − y) + … + (y − x)^{α_k} a_k w(x − y) ] ( (y − x)^{α_2} / α_2! ) dy = 0
    ⋮
    ∫_y [ (y − x)^{α_1} a_1 w(x − y) + … + (y − x)^{α_k} a_k w(x − y) ] ( (y − x)^{α_k} / α_k! ) dy = 0,

or in matrix-vector notation

    ∫_y w(x − y) p(y − x) p^T(y − x) dy · a(x) = ( 1, 0, …, 0 )^T = p(0).

Solving for a(x) and inserting this into the correction function gives

    C(x, y) = p^T(y − x) [ ∫_y w(x − y) p(y − x) p^T(y − x) dy ]^{-1} p(0),

and the approximation becomes

    u^h(x) = ∫_y p^T(y − x) [ ∫_y w(x − y) p(y − x) p^T(y − x) dy ]^{-1} p(0) w(x − y) u(y) dy
           = p^T(0) [ ∫_y w(x − y) p(y − x) p^T(y − x) dy ]^{-1} ∫_y p(y − x) w(x − y) u(y) dy.

After shifting the basis another time, the final approximation is obtained as

    u^h(x) = p^T(x) [ ∫_y w(x − y) p(y) p^T(y) dy ]^{-1} ∫_y w(x − y) p(y) u(y) dy.   (4.6)

One can thus see that all three approaches of the RKM give the same resulting continuous approximation of u(x). Also the similarities of the RKM and the MLS can be seen. The important difference is that the MLS uses discrete expressions (sums over a number of points), see Eq. 4.1, whereas in the RKM we have continuous integrals, see Eq. 4.6. For example, the discrete moment matrix M(x) of the MLS is M(x) = Σ_{i=1}^N w(x − x_i) p(x_i) p^T(x_i), whereas the continuous moment matrix M(x) of the RKM is M(x) = ∫_y w(x − y) p(y) p^T(y) dy.

The modified kernel K(x, y) fulfills the consistency requirements up to order n. The correction function of the modified kernel can be identified as

    C(x, y) = p^T(x) [M(x)]^{-1} p(y).

This correction function most importantly takes boundary effects into account. Therefore, the correction function is sometimes referred to as a boundary correction term [94]. Far away from the boundary the correction function plays almost no role [97, 98].

To evaluate the above continuous integral expressions, numerical integration, thus discretization, is necessary. This step leads from the RKM to the RKPM. This does not yet directly aim at evaluating the integrals of the weak form of the PDE, but rather at yielding shape functions to work with. To do this, an
admissible particle distribution (see subsection 4.6) has to be set up [99]. Then
the integral can be approximated as

$$u^h(x) = \int_y C(x,y)\, w(x-y)\, u(y)\, dy \approx \sum_{i=1}^N C(x,x_i)\, w(x-x_i)\, u_i\, \Delta V_i = p^T(x)\left[M(x)\right]^{-1}\sum_{i=1}^N p(x_i)\, w(x-x_i)\, u_i\, \Delta V_i, \qquad (4.7)$$

with the discrete moment matrix

$$M(x) = \sum_{i=1}^N w(x-x_i)\, p(x_i)\, p^T(x_i)\, \Delta V_i.$$
The choice of the integration weights $\Delta V_i$, hence the influence of each particle in the evaluation of the integral or, more descriptively, the particle's lumped volume, is not prescribed. However, once a certain quadrature rule is chosen, it should be carried out consistently through all the integrals. The choice $\Delta V_i = 1$ leads to exactly the same RKPM approximation as the MLS approximation [99]; compare Eqs. 4.1 and 4.7. The equivalence between MLS and RKPM is a remarkable result which unifies two methodologies with very different origins; it has also been discussed in [17] and [87]. Belytschko et al. claim in [17]:
Any kernel method in which the parent kernel is identical to the
weight function of a MLS approximation and is rendered consistent
by the same basis is identical. In other words, a discrete kernel
approximation which is consistent must be identical to the related
MLS approximation.
In [87], Li and Liu make the point that the use of a shifted basis $p(x - x_i)$ instead of $p(x)$ may not be fully equivalent in cases where basis functions other than monomials are used.
It should be mentioned that $\Delta V_i = 1$ cannot be called a suitable approximation of an integral in general. Consider the following example, where the integral $\int_\Omega 1\, d\Omega$ shall be evaluated with $\sum_{i=1}^N \Delta V_i$. Choosing $\Delta V_i = 1$ does not make sense in this case, as the result $\sum_{i=1}^N \Delta V_i = \sum_{i=1}^N 1 = N$ depends only on the number of integration points $N$, without any relation to the integral itself.
In case of integrating the above RKPM expressions one can find that $\Delta V_i$ appears in two integrals. The one for the moment matrix $M(x)$ is later inverted. So for constant $\Delta V_i = c$, $c < \infty$, the same functions will result for any $c$. Hence, for constant $\Delta V_i$ it is not important whether or not the result is a measure of the domain ($\sum_{i=1}^N \Delta V_i = \mathrm{meas}\{\Omega\}$). However, more suitable integration weights $\Delta V_i$ might be employed, with $\Delta V_i$ not being constant but dependent on the particle density, dilatation parameter etc. Then, other PU functions than the standard MLS shape functions are obtained.
To the authors' knowledge no systematic studies with $\Delta V_i \neq 1$ have yet been published. However, we mention [1] as a paper where experiences with different choices for $\Delta V_i$ have been made. There, correct volumes $\Delta V_i$, $\Delta V_i = 1$ and random values for $\Delta V_i$ have been tested. It is mentioned that consistency of the resulting RKPM shape functions may be obtained in any of these three cases, but values other than $\Delta V_i = 1$ do not show advantages; for the random values the approximation properties clearly degrade. Therefore, in the following we only consider $\Delta V_i = 1$, where RKPM equals MLS, but keep in mind that there is a difference between RKPM and MLS (see [86, 94] for further information) which in practice seems to be of minor importance.
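With $\Delta V_i = 1$, the RKPM/MLS shape functions of Eq. 4.7 can be sketched in a few lines. The 1D node layout, the window function and the evaluation points below are illustrative assumptions; the check verifies the partition of unity and 1st order consistency.

```python
# Sketch (assumed 1D setup): RKPM/MLS shape functions
# phi_i(x) = p^T(x) M(x)^{-1} p(x_i) w(x - x_i) with Delta V_i = 1
# and linear basis p(x) = (1, x).

def w(t, rho=0.25):
    q = abs(t) / rho
    return (1 - q) ** 2 * (1 + 2 * q) if q <= 1 else 0.0  # C^1 window

nodes = [i / 10 for i in range(11)]  # particles on [0, 1]

def shape(x):
    # discrete moment matrix M(x) = sum_i w(x-x_i) p(x_i) p(x_i)^T
    m00 = m01 = m11 = 0.0
    for xi in nodes:
        wv = w(x - xi)
        m00 += wv; m01 += wv * xi; m11 += wv * xi * xi
    det = m00 * m11 - m01 * m01
    inv = (m11 / det, -m01 / det, m00 / det)  # 2x2 inverse of M
    phi = []
    for xi in nodes:
        wv = w(x - xi)
        # phi_i = p(x)^T M^{-1} p(x_i) w(x - x_i)
        phi.append((inv[0] + inv[1] * xi + x * (inv[1] + inv[2] * xi)) * wv)
    return phi

x = 0.33
phi = shape(x)
pu = sum(phi)                                   # partition of unity
lin = sum(f * xi for f, xi in zip(phi, nodes))  # reproduces x exactly
print(pu, lin)
```

By construction the shape functions reproduce constants and linear fields exactly at any evaluation point, including points near the boundary.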
Important aspects of the RKM and RKPM shall be pointed out in more detail in the following, where some important statements from the literature are cited.

Due to the discretization procedure, error terms are introduced: the amplitude and phase error terms (APETs). The reproducing conditions and the criterion to derive the correction function differ from those of the continuous system, the difference lying in the APETs [95]. From discrete Fourier analysis we know that APETs are the outcome of the discretization of the system. In the general case, the APETs decrease as the dilatation parameter increases, but they cannot be eliminated from the reproducing process. Another error term arises in higher-order polynomial reproduction (higher than the order of consistency) and can be called the reproduction error. This error is introduced by the higher-order derivatives and is proportional to the dilatation. This means that a larger dilatation parameter will cause higher reproduction errors, while the APETs decrease. Therefore, we find a rise and a fall in the error distribution with varying dilatation parameter [95].
This can also be seen in Fig. 4 for example results which are produced with a Bubnov-Galerkin MM and a collocation MM applied to the approximation of the one-dimensional advection-diffusion equation.

Figure 4: Rise and fall in the error for varying dilatation parameter.

It can clearly be seen that
the Bubnov-Galerkin MM gives much better results with small rises and falls in
the error plot while the collocation MM shows a strong (and not predictable)
dependence on the dilatation parameter and gives worse results in all cases.
However, it should already be mentioned here that Bubnov-Galerkin MMs are
much more computationally demanding than collocation MMs.
Another important conclusion from the study of the Fourier analysis is that
the resolution limit and the resolvable scale of the system are two different
issues. The resolution limit, solely determined by the sampling rate, is problem
independent and is an unbreakable barrier for discrete systems. On the other
hand, the resolvable scale of the system is dictated by the interaction between
the system responses, especially its high scale non-physical noise, and the choice
of interpolation functions. The Fourier analysis provides the tools to design
better interpolation functions which will improve the accuracy of the solution
and stretch the resolvable scale toward the resolution limit. [95]
For a global error analysis of the meshfree RKPM interpolants under a global
regularity assumption on the particle distribution, see [59]; for a local version
applicable to cases with local particle refinement see [60].
4.3.1 Hermite RKPM

The name Hermite RKPM stems from the well-known Hermite polynomials used in the FEM for the solution of fourth order boundary value problems. In the Hermite RKPM not only the function values but also derivative values $D^{\alpha_j} u$ enter the approximation, which takes the form

$$u^h(x) = p^T(x)\left[\int_y \sum_{j=1}^q w^{(\alpha_j)}(x-y)\, p(y)\, p^T(y)\, dy\right]^{-1} \sum_{j=1}^q \int_y w^{(\alpha_j)}(x-y)\, p(y)\, D^{\alpha_j} u(y)\, dy.$$
4.4 Particle Placement

4.5 Weighting Functions
The weighting functions of MLS and RKPM are translated and dilatated. The ability to translate makes elements unnecessary, while dilatation enables refinement [94].

Both meshfree methods, MLS and RKPM, which have been used for the construction of a PU of consistency $n$, employ a weighting (also: kernel or window) function $w$ which has not been discussed yet. As the methods have different origins, the motivations for introducing weighting functions differ.
The MLS has its origin in interpolating data, and the weighting function has been introduced to obtain a certain locality of the point data due to its compact support. The moving weight function distinguishes the MLS from other least-squares approaches. If all weighting functions are constant, then $u^h(x)$ is a standard non-moving least-squares approximation or regression function for $u$. In this case the unknown vector $a(x)$ is a constant vector $a$ and all unknowns are fully coupled.
The RKPM, with its origin in wavelet theory, uses the concept of the weighting function already as its starting point: the integral window transform. It can easily be seen that this continuous approximation turns out to be exact if the weight function $w(x-y, \rho)$ equals the Dirac delta function $\delta(x-y)$. However, in the discrete version of this RKM, the RKPM, the delta function cannot be used in the numerical integration, and thus other functions with small supports have to be used [107].
Despite these rather different viewpoints, due to the similarity of the resulting methods there is also a close similarity in the choice of the weighting functions for MLS and RKPM. The most important characteristics of weight functions are listed in the following.
Lagrangian and Eulerian kernels In MMs the particles often move through the domain with certain velocities. That is, the problem under consideration is given in Lagrangian formulation, rather than in Eulerian form where particles are kept fixed throughout the calculation. The weighting (= window) function may be a function of the material or Lagrangian coordinates $X$, $w_i(X) = w(\|X - X_i\|, \rho)$, or of the spatial or Eulerian coordinates $x$, $w_i(x) = w(\|x - x_i(t)\|, \rho)$. The difference between these two formulations may be seen in Fig. 5a) and b), where particles move due to a prescribed non-divergence-free velocity field. It is obvious that the shape of the support changes with time for the Lagrangian kernel but remains constant for the Eulerian kernel.
Size and shape of the support The support $\Omega_i$ of a weight function $w_i$ may differ in size and shape, the latter implicitly including the dimension of the PDE problem under consideration. Although any choice of the support shape might be possible, in practice spheres, ellipsoids and parallelepipeds are most frequently used. The size and shape of the support of the weight function are directly related to the size and shape of the support of the resulting shape function, since $\Phi_i(x) \neq 0$ only where $w_i(x) \neq 0$.
$$\text{4th order spline, } w \in C^2: \quad w(q) = \begin{cases} 1 - 6q^2 + 8q^3 - 3q^4, & q \le 1\\ 0, & q > 1 \end{cases}$$

$$2k\text{-th order spline, } w \in C^{k-1}: \quad w(q) = \begin{cases} \left(1 - q^2\right)^k, & q \le 1\\ 0, & q > 1 \end{cases}$$

$$\text{singular:} \quad w(q) = \begin{cases} q^{-k}, & q \le 1\\ 0, & q > 1 \end{cases}$$

$$\text{exponential 1, } w \in C^{-1}: \quad w(q) = \begin{cases} e^{-(q/c)^{2k}}, & q \le 1\\ 0, & q > 1 \end{cases}$$

$$\text{exponential 2, } w \in C^{0}: \quad w(q) = \begin{cases} \dfrac{e^{-(q/c)^{2k}} - e^{-(1/c)^{2k}}}{1 - e^{-(1/c)^{2k}}}, & q \le 1\\ 0, & q > 1 \end{cases}$$

$$\text{exponential 3, } w \in C^{\infty}: \quad w(q) = \begin{cases} e^{1/(q^2 - 1)}, & q < 1\\ 0, & q \ge 1 \end{cases}$$

where $q = \|x - x_i\| / \rho$. The difference between the two exponential weighting functions is that version 1 is not zero at the boundary of the support, because $w(1) = e^{-(1/c)^{2k}} \neq 0$; thus it is not continuous ($C^{-1}$). Version 2 fixes this lack by shifting the weighting function, subtracting $e^{-(1/c)^{2k}}$ to have $w(1) = 0$, and then dividing by $1 - e^{-(1/c)^{2k}}$ to still have $w(0) = 1$. In Fig. 6 the 3rd and 4th order spline weighting functions are shown together with the Gaussian weighting function (version 2) for different values of $c$ and $k = 1$.
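The boundary behaviour of these windows can be checked directly; $c = 0.4$ and $k = 1$ below are assumed example values.

```python
# Quick check of the listed windows (c = 0.4, k = 1 are assumed values):
# exponential version 1 is nonzero at the support boundary q = 1, while
# version 2 is shifted and rescaled so that w(1) = 0 and w(0) = 1.

import math

def spline4(q):
    return 1 - 6 * q**2 + 8 * q**3 - 3 * q**4 if q <= 1 else 0.0

def exp1(q, c=0.4, k=1):
    return math.exp(-(q / c) ** (2 * k)) if q <= 1 else 0.0

def exp2(q, c=0.4, k=1):
    if q > 1:
        return 0.0
    e1 = math.exp(-(q / c) ** (2 * k))
    eb = math.exp(-(1 / c) ** (2 * k))
    return (e1 - eb) / (1 - eb)

print(spline4(0.0), spline4(1.0))  # 1 at the center, 0 at the boundary
print(exp1(1.0))                   # jump at q = 1: e^{-(1/c)^{2k}} != 0
print(exp2(0.0), exp2(1.0))        # normalized: 1 and 0
```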
Figure 6: Weighting functions $w(\|x - x_i\|/\rho)$ plotted over $\|x - x_i\|/\rho$.

4.6 Admissible Particle Distributions
In both methods, MLS and RKPM, in order to evaluate the $n$-th order consistent shape functions at a certain point $x$, a $k \times k$ matrix, the moment matrix $M(x)$, must be inverted, i.e. a system of equations must be solved. The parameter $k$, which defines the size of this system, equals the number of components in the intrinsic basis $p(x)$, and thus depends on the dimension of the problem $d$ and the consistency order $n$, see subsection 2.4. In order to evaluate the integral expressions of the weak form of a PDE problem, a large number of integration points $x_Q$ has to be introduced. At each of these points the $k \times k$ system has to be built and solved.

The need to build up and invert the moment matrix at a large number of points is the major drawback of the MMs (in contrast to the FEM), because of the computational cost and the possibility that the matrix inversion fails. The computational cost consists in evaluating summation expressions, including a neighbour search, and in the matrix inversion itself. Furthermore, the computation of the derivatives of the shape functions involves a large number of (small) matrix-matrix and matrix-vector multiplications, see Eqs. 4.2 and 4.3.
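For a complete polynomial basis, the dependence of $k$ on $d$ and $n$ mentioned above is given by $k = \binom{n+d}{d}$; a minimal sketch:

```python
# Size k of a complete polynomial basis of consistency order n in d dimensions:
# k = binomial(n + d, d); this is the size of the moment matrix to invert.

from math import comb

def basis_size(n, d):
    return comb(n + d, d)

print(basis_size(1, 2))  # linear basis in 2D: 3 terms (1, x, y)
print(basis_size(2, 3))  # quadratic basis in 3D: 10 terms
```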
4.7

and $B(x) = \sum_{i=1}^N w\left(\frac{x - x_i}{\rho}\right) p(0)$. When the dilatation parameter varies at
In the previous chapter it is explained how a partition of unity with n-th order
consistency can be constructed, either mesh-based or meshfree. For this purpose
a basis p (x) was introduced, which is called intrinsic basis.
The next step is to define the approximation. Most of the MMs to be discussed below simply use an approximation of the form

$$u^h(x) = \sum_{i=1}^N \Phi_i(x)\, u_i = \Phi^T(x)\, u,$$

so that the PU functions $\{\Phi_i(x)\}$ are directly taken as trial functions in the
approximation. However, there exists also the possibility to use an extrinsic
basis in the approximation which can serve the following purposes: Either to
increase the order of consistency of the approximation or to include a priori
knowledge of the exact solution of the PDE into the approximation. This will
be discussed in more detail in subsections 5.7 and 5.8.
Summarizing this, it can be concluded that the approximation can either
be done with usage of an intrinsic basis only or with an additional extrinsic
basis. After defining the approximation uh (x) and inserting this for u (x) in
the weak form of the method of weighted residuals, test functions have to be
defined. Then the integral expressions of the weak form can be evaluated and
the system matrix and right hand side can be constructed.
With the definition of the partition of unity, the approximation and the test functions, the methods considered herein can be clearly identified, as shown in Fig. 1 on page 16.
Before discussing the individual MMs in the following subsections a few
general remarks shall be made with respect to collocation and Galerkin MMs;
5.1 Smoothed Particle Hydrodynamics (SPH)
The SPH method was introduced in 1977 by Lucy in [104]; Monaghan worked this out further in [107] by using the notion of kernel approximations. The SPH is a Lagrangian particle method and is in general a representative of a strong form collocation approach. SPH is the first and simplest of the MMs; it is easy to implement and reasonably robust [108]. SPH was motivated by ideas from statistical theory and from Monte Carlo integration and was first used for astronomical problems [52]. The name SPH stems from the smoothing which the kernel function applies to the particles' point properties, thus leading to a continuous field.
First we consider the statistical viewpoint, which is the origin of the SPH. Here, the density of a system of $N$ particles with total mass $M$ is estimated as

$$\rho^h(x) = \frac{M}{N}\sum_{i=1}^N w(x - x_i),$$

which preserves the total mass, $M = \int \rho^h(x)\, dx$, for a normalized kernel $w$.
Now we leave this special consideration for the approximation of the density in fluid problems and generalize the above for the approximation of an arbitrary function $u(x)$:

$$u^h(x) = \int_y w(x-y)\, u(y)\, dy \approx \sum_{i=1}^N w(x-x_i)\,\Delta V_i\, u(x_i) = \sum_{i=1}^N w(x-x_i)\,\Delta V_i\, u_i.$$

Figure 7: Shape functions constructed with the SPH. They do not build a PU, in particular not near the boundary.

The SPH was introduced for unbounded problems in astrophysics [52, 108], and applying it to bounded cases leads to major problems. This is due to its failure to meet the reproducing conditions of even 0-th order near the boundaries. This can easily be shown by a Taylor series analysis. For 0-th order consistency we need

$$u^h(x) = \sum_{i=1}^N w(x-x_i)\,\Delta V_i\, u(x) + \sum_{i=1}^N \sum_{|\alpha| \ge 1} \frac{(x_i - x)^{\alpha}}{|\alpha|!}\, D^{\alpha} u(x)\, w(x-x_i)\,\Delta V_i = \sum_{i=1}^N w(x-x_i)\,\Delta V_i\, u(x) + \text{error}.$$

Thus, the kernel sum $\sum_{i=1}^N w(x-x_i)\,\Delta V_i$ must equal 1 in the whole domain to fulfill this equation, i.e. to have an approximation of 0-th order. It is recalled that in case of the RKPM consistency could be reached due to a correction function, which is obviously missing in SPH ($\Delta V_i$ stands for integration weights and not for a correction term). Thus consistency cannot be reached at boundaries, where $\sum_{i=1}^N w(x-x_i)\,\Delta V_i \neq 1$, which can easily be seen from the dotted line in Fig. 7. The shape functions of SPH are $\Phi_i(x) = w(x-x_i)\,\Delta V_i$; thus, in this case of regularly distributed nodes in one dimension, all inner shape functions are identical, due to $\Delta V_i = (x_{i+1} - x_{i-1})/2 = \text{const}$, and only the nodes on the boundaries have different shape functions, due to $\Delta V_1 = \Delta V_N = \Delta V_i / 2$ (one may also choose $\Delta V_i = 1$ for all nodes).

Figure 8: Detail of the kernel sum of the SPH shape functions of Fig. 7. These small oscillations give rise to instabilities in SPH.

The lack of consistency near boundaries leads to a solution deterioration near the domain boundaries, also called spurious boundary effects [25]. SPH also shows the so-called tension instability, first identified by Swegle et al. [127], which results from the interaction of the kernel with the constitutive relation. It is independent of (artificial) viscosity effects and time integration algorithms. It has been shown by Dilts in [38] that the tension instability is directly related
to the appearance of oscillations in the kernel sums (which in SPH are not exactly 1, as SPH shape functions do not form a partition of unity). This can be seen in Fig. 8. If these oscillations can be eliminated, or in other words, if consistency can be reached, the tension instability vanishes [38]. It is also
important to note that the tension instability is a consequence of using Eulerian
kernels (see subsection 4.5) in a Lagrangian collocation scheme; it does not occur
for Lagrangian kernels [14].
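The failure of the kernel sum near the boundary can be reproduced in a few lines; the normalized hat kernel, the node spacing and the dilatation below are assumed example values.

```python
# Sketch (assumed 1D setup): SPH kernel sum  sum_i w(x - x_i) DV_i  with a
# normalized hat kernel; close to 1 in the interior, clearly below 1 at the
# boundary, where 0-th order consistency is lost.

def w(t, rho=0.1):
    q = abs(t) / rho
    return (1 - q) / rho if q <= 1 else 0.0  # integrates to 1 over [-rho, rho]

h = 0.02
nodes = [i * h for i in range(51)]  # particles on [0, 1]

def kernel_sum(x):
    return sum(w(x - xi) * h for xi in nodes)  # DV_i = h

print(kernel_sum(0.5))  # interior: ~1
print(kernel_sum(0.0))  # boundary: well below 1
```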
Another instability in SPH (and other collocation MMs) results from the
rank deficiency of the discrete divergence operator [14] and occurs for Eulerian
as well as for Lagrangian kernels.
There are several ideas to stabilize these problems. One approach is to use
so-called stress points [44, 119]. The name is due to the fact that stresses are
calculated at these points by the constitutive equation in terms of the particle
velocities [14]. Its extension to multi-dimensional cases is not easy as stress
points must be placed carefully [14, 119].
We summarize the idea of the SPH as follows: In SPH a computational domain is initially replaced by discrete points, which are known as particles. They represent any field quantities in terms of their values and move with their own (fluid) velocities, carrying all necessary physical information. These particles can be considered as moving interpolation points. In order to move the particles correctly during a time step it is necessary to construct the forces which an element of fluid would experience [108]. These forces must be calculated from the information carried by the particles. The use of an interpolation kernel allows smoothed approximations of the physical properties of the domain to be calculated from the particle information. This can be interpreted as smoothing the discrete properties of the points over a finite region of space and hence led to the name SPH [81].
It has already been mentioned above that the treatment of boundaries is one major drawback of the SPH, which has been pointed out in many references, see e.g. [87]. In this respect it differs from the other MMs. There is no systematic way to handle either rigid or moving boundaries [98]. According to [108], rigid walls have been simulated using (a) forces with a length scale $h$ (this mimics the physics behind the boundary condition), (b) perfect reflection and (c) a layer of fixed particles. The fixed particles in the latter approach are often called ghost particles, see e.g. [119], where boundary conditions in SPH have been discussed intensively. Natural boundary conditions are also a major problem in SPH and collocation methods in general [14].
It should also be mentioned, concerning the h-adaptivity of the SPH, that Belytschko et al. claim in [15] that SPH does not necessarily converge if the size of the smoothing length is kept proportional to the distance between nodes, $\rho/h = \text{const}$; that is, a standard refinement procedure of adding particles and simultaneously decreasing the support size may fail. In fact, convergence proofs for the SPH assume certain more demanding relationships between nodal spacing and support size [15]. As a consequence the sparsity of the equations decreases drastically, leading to a severe drop in computational efficiency.
Improvements of the standard SPH method are still an active research area, and there exist a number of other proposed correction ideas for the SPH addressing the tensile instability, boundary conditions and consistency; see e.g. [87], [15] and [119] for an overview and further references. The approaches to fix certain shortcomings of the SPH differ in their complexity and computational effort. We only describe briefly two ideas for correcting the SPH; both approaches obtain a certain consistency of the SPH shape functions. One may also interpret the RKPM shape functions (subsection 4.3) in a collocation setting, see e.g. [1], as a corrected SPH with the ability to reproduce polynomials exactly. The Finite Point Method (FPM), introduced by Oñate et al. in [115], is also a consistent collocation method which is based on fixed (Eulerian) particles in contrast to the moving (Lagrangian) particles of the SPH.
Corrected Smoothed Particle Hydrodynamics (CSPH)

The CSPH is based on a number of correction terms to the standard SPH, with the aim to achieve first order consistent solutions without spurious modes or mechanisms [22, 81].

Instead of $w$, a corrected kernel $\widehat{w}_i(x) = w_i(x)\,\alpha(x)\left[1 + \beta(x)\,(x - x_i)\right]$ is introduced, where $\alpha$ and $\beta$ are evaluated by enforcing consistency conditions [22]. The resulting method is called CSPH. The correction largely improves the accuracy near or on the boundaries of the problem domain. Next, the discrepancies that result from point integration are addressed by introducing an integral correction vector. This enables the integration-corrected CSPH to pass the linear patch test [22].
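A minimal sketch of such a linearly consistent kernel correction follows, with $\alpha(x)$ and $\beta(x)$ determined here from the discrete 0-th and 1st order consistency conditions; the kernel, the node layout and this particular way of enforcing the conditions are assumptions for illustration, not the CSPH formulation of [22].

```python
# Sketch: correct the kernel w by alpha(x) * (1 + beta(x) * (x - x_i)) so the
# corrected weights reproduce constants and linear fields (assumed 1D setup).

def w(t, rho=0.15):
    q = abs(t) / rho
    return (1 - q) ** 2 * (1 + 2 * q) if q <= 1 else 0.0

h = 0.05
nodes = [i * h for i in range(21)]  # particles on [0, 1]

def corrected_weights(x):
    s0 = s1 = s2 = 0.0
    for xi in nodes:
        wv = w(x - xi) * h
        s0 += wv; s1 += wv * (x - xi); s2 += wv * (x - xi) ** 2
    beta = -s1 / s2                 # kills the first moment
    alpha = 1.0 / (s0 + beta * s1)  # normalizes the sum to 1
    return [w(x - xi) * h * alpha * (1 + beta * (x - xi)) for xi in nodes]

x = 0.0  # boundary point, where plain SPH fails
cw = corrected_weights(x)
c0 = sum(cw)                                          # 0-th order: = 1
c1 = sum(ci * (x - xi) for ci, xi in zip(cw, nodes))  # 1st order: = 0
print(c0, c1)
```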
A last correction term is introduced to stabilize the point-wise integration
of the SPH and thus prevent the emergence of spurious modes or artificial
mechanisms in the solution. The stabilization technique of the CSPH is based
on Finite Increment Calculus [81] or least-squares approaches [22]. The cause of
spatial instabilities due to point-based integration is described in detail in [22]:
The point-based integration used in the CSPH method relies on
the evaluation of function derivatives at the same point where the
function values are sought. It is well known in the finite difference
literature that this can lead to spurious modes for which the derivative is zero at all points considered. The simplest example of these problems is encountered when the 1D central difference formula for the first derivative is used as $u'_i = \frac{u_{i+1} - u_{i-1}}{2\Delta x}$. Clearly, there are two solution patterns or modes for which the above formula gives zero derivatives at all points. The first is obtained when the function is constant, e.g. $u = 1 \Rightarrow u_a = 1$ for all $a$. Then the result is correct, as $u'$ shall be zero here. The second possibility, however, emerges when $u_a = (-1)^a$. This is clearly an invalid or spurious mode. These modes will not contribute towards the point-integrated variational principle and are consequently free to grow unchecked and possibly dominate and therefore invalidate the solution obtained. It is easy to show that spurious modes can also be found in the CSPH equations that are used for the evaluation of the derivative.
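The two modes described in the quote can be checked directly:

```python
# Both null modes of the 1D central difference: a constant field and the
# alternating field u_a = (-1)^a give zero derivatives at every interior point.

dx = 0.1
u_const = [1.0] * 10
u_alt = [float((-1) ** a) for a in range(10)]

def central(u):
    return [(u[i + 1] - u[i - 1]) / (2 * dx) for i in range(1, len(u) - 1)]

print(central(u_const))  # all zeros (correct)
print(central(u_alt))    # all zeros as well: the spurious mode
```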
5.2 Diffuse Element Method (DEM)
The DEM was introduced by Nayroles et al. in [111]. Although they did not note this fact, the interpolants they used in their method had been introduced and studied by Lancaster and Salkauskas and others, called MLS interpolants in curve and surface fitting [18, 103]. Nayroles et al. had a different viewpoint of their method as a generalization of the FEM.

In [111] they consider the FEM as a special case of a least-squares procedure:

... relations between $a_j^e$ and $u_i$: $\{u_i\} = p_j(x_i)\,\{a^e\} = [P_n]\,\{a^e\}$. If $n_e$ is equal to $m$, the matrix $[P_n]$ may in general be inverted, leading to the standard shape functions $N_i(x)$: $u(x) = p_j(x)\,[P_n]^{-1}\,\{u_i\} = N_i(x)\,\{u_i\}$. This interpolation procedure may also be seen as minimizing the following expression with respect to $a^e$ for a given element $e$: $J^e(a^e) = \sum_{i=1}^{n_e} w_i^e\left(u_i - u^e(x_i)\right)^2$, where $w_i^e = 1$ if node $i$ belongs to the element $e$ and $w_i^e = 0$ otherwise.
The basic idea of the diffuse approximation is to replace the FEM interpolation, valid on an element, by a local weighted least-squares fitting, valid in a small neighbourhood of a point $x$, and based on a variable number of nodes. The approximation function is made smooth by replacing the discontinuous $w_i^e$ coefficients by continuous weighting functions $w^x(x)$ evaluated at $x_i$. The vanishing of these weighting functions at a certain distance from the point $x$ preserves the local character of the approximation. Around a point $x$, the function $u^x(x)$ is locally approximated by an expression equivalent to the one above: $u^x(x) = \sum_{j=1}^m p_j(x)\, a_j^x$. The coefficients $a_j^x$ corresponding to the point $x$ are obtained by minimizing the following expression: $J^x(a^x) = \sum_{i=1}^n w_i^x\left(u_i - u^x(x_i)\right)^2$.
It follows that each evaluation point of the DEM may be considered as a particular kind of finite element with only one integration point, a number of nodes varying from point to point and a diffuse domain of influence [111]. It can be seen that the classical FEM is just a special case of the DEM, where the weight function is constant over selected subdomains [111].

The DEM approximation can be obtained directly from the MLS approximation, although this was not realized by Nayroles [76]. Although the shape functions of the DEM are identical to the MLS shape functions, Nayroles et al. made a number of simplifications:
They estimate the derivative of a function $u$ by differentiating only $p(x)$ with respect to $x$ and considering $a(x)$ as a constant [111]. Thus, e.g. for the first derivative follows $u^h_{,j}(x) = p^T_{,j}(x)\, a(x) = p^T_{,j}(x)\, M^{-1}(x)\, B(x)\, u$, assuming that $a(x)$ is constant, hence $a_{,j}(x) = 0$. This incorrectness introduces problems and turns out to be the major difference to the EFG, where the derivatives are obtained correctly.
They use a very low-order quadrature rule for integration [103]. Nayroles claims that it is easy to introduce the DEM into existing FEM codes by using the existing integration points as the diffuse elements, and in some cases they even use fewer integration points than in the FEM [111]. However, the opposite is true, and in MMs in general we need many more integration points for accurate results.

They did not enforce EBCs accurately [103].

As a consequence the DEM does not pass the patch test, which is analogous to a failure in consistency [103].
Petrov-Galerkin Diffuse Element Method (PG DEM)

The PG DEM is a modified version of the DEM which passes the patch test. It was introduced by Krongauz and Belytschko in [76], rather to show the reason why the DEM does not pass the patch test than to introduce a new method for practical use. This method is based on a Petrov-Galerkin formulation, where test functions are required to meet different conditions than trial functions.

Krongauz and Belytschko discovered an interesting property of DEM approximations [76]:
5.3 Element Free Galerkin (EFG)
The EFG uses MLS interpolants to construct the trial and test functions [18]. In contrast to the DEM, certain key differences are introduced in the implementation to increase the accuracy. These differences to the DEM are [18]:

Certain terms in the derivatives of the interpolants which were omitted in the DEM are included, i.e. the derivatives are computed according to Eq. 4.2.

A much larger number of integration points is used, arranged in a cell structure.

EBCs are enforced correctly; in the first publication [18] by Lagrange multipliers.
The partial derivatives of the shape functions $\Phi(x)$ are obtained by applying the product rule to $\Phi = p^T(x)\, M^{-1}(x)\, B(x)$, which results in

$$\Phi_{,j} = p^T_{,j}(x)\, M^{-1}(x)\, B(x) + p^T(x)\, M^{-1}_{,j}(x)\, B(x) + p^T(x)\, M^{-1}(x)\, B_{,j}(x),$$

with $M^{-1}_{,i} = -M^{-1}\, M_{,i}\, M^{-1}$ [18]. In the DEM only the first expression in the sum has been considered, but for accurate results the coefficients $a(x)$ should not be assumed to be constant; thus $p^T(x)\, M^{-1}_{,j}(x)\, B(x) + p^T(x)\, M^{-1}(x)\, B_{,j}(x)$ cannot be neglected [18].
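The identity $M^{-1}_{,i} = -M^{-1} M_{,i} M^{-1}$ can be verified numerically on any parameter-dependent invertible matrix; the 2x2 example below is an arbitrary assumption for illustration.

```python
# Finite-difference check of d(M^{-1})/ds = -M^{-1} (dM/ds) M^{-1}
# on an assumed parameter-dependent 2x2 matrix M(s).

def M(s):
    return [[2 + s, s * s], [s * s, 3 + 2 * s]]

def inv2(A):
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

s, eps = 0.5, 1e-6
dM = [[1.0, 2 * s], [2 * s, 2.0]]  # dM/ds, computed analytically
Minv = inv2(M(s))
# central finite difference of the inverse
lhs = [[(inv2(M(s + eps))[i][j] - inv2(M(s - eps))[i][j]) / (2 * eps)
        for j in range(2)] for i in range(2)]
rhs = matmul(matmul(Minv, dM), Minv)
err = max(abs(lhs[i][j] + rhs[i][j]) for i in range(2) for j in range(2))
print(err)  # small: lhs = -M^{-1} dM M^{-1}
```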
An alternative is to use basis functions $q_k(x)$ which are orthogonal with respect to the weighted discrete inner product:

$$\sum_i w(x - x_i)\, q_k(x_i)\, q_j(x_i) = 0, \qquad k \neq j.$$
For the given arbitrary basis functions $p_k(x)$ the orthogonal basis functions $q_k(x)$ can be obtained by using the Schmidt orthogonalization procedure. Because of the orthogonality condition the matrix $M$ becomes diagonal and the coefficients $a(x)$ can be obtained directly. The advantage of using orthogonal basis functions is that it reduces the computational cost and improves the accuracy of the interpolants when the matrix $M$ becomes ill-conditioned [103]. The computational costs of the orthogonalization procedure, however, are of the same order as the costs of the matrix inversion. From the viewpoint of accuracy, though, orthogonalization of the basis functions may be preferred over matrix inversion, since the orthogonalization procedure is equivalent to solving the system of equations.
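A minimal sketch of the Schmidt orthogonalization step with respect to the weighted discrete inner product (the window, the nodes and the evaluation point are assumptions for illustration):

```python
# Orthogonalize the basis (1, x) with respect to
# <f, g> = sum_i w(x - x_i) f(x_i) g(x_i); afterwards the moment matrix
# built from the orthogonalized basis is diagonal.

def w(t, rho=0.3):
    q = abs(t) / rho
    return 1 - q if q <= 1 else 0.0

nodes = [0.0, 0.1, 0.25, 0.4, 0.5]
x = 0.3

def inner(f, g):
    return sum(w(x - xi) * f(xi) * g(xi) for xi in nodes)

p1 = lambda t: 1.0
p2 = lambda t: t
# Schmidt step: q2 = p2 - (<p2, q1> / <q1, q1>) q1, with q1 = p1
c = inner(p2, p1) / inner(p1, p1)
q2 = lambda t: t - c
off_diag = inner(p1, q2)  # off-diagonal moment; vanishes after the step
print(off_diag)
```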
5.4 Least-Squares Meshfree Methods (LSMMs)
One may show that this is equivalent to using specific test functions in a Petrov-Galerkin setting, i.e. a setting where test and shape functions are chosen differently. These functions may be constructed in a mesh-based way, for example by the standard FEM functions, or in a meshfree way, leading to LSMMs. LSMMs have been described by Park et al. in [117] and Zhang et al. in [131].
The least-squares formulation of a problem has a number of well-known distinct properties compared to Bubnov-Galerkin settings, see e.g. [70]. One of the advantages of numerical methods approximating the least-squares weak form is that stabilization of non-self-adjoint problems (e.g. convection problems) is not required. A disadvantage is the higher continuity requirement on the test and shape functions, which limits the usage of many FEM shape functions that are often only $C^0$ continuous. Note that this is not a problem with MMs, as they may easily be constructed to have any desired order of continuity. We do not further describe advantages and disadvantages of the least-squares formulation and refer the interested reader to [70].
It is noteworthy that LSMMs show the property that they are highly robust
with respect to integration [117], i.e. even very coarse integration may be used
reliably for the evaluation of the weak form.
5.5 Meshless Local Petrov-Galerkin (MLPG)
Global vs. Local Weak Forms Before discussing the MLPG method, the concept of a local weak form shall be introduced. It has already been pointed out that a weak form is needed for the method of weighted residuals. We separate global and local weak forms following Atluri and Shen [4]. Global weak forms involve integrals over the global domain and boundary, while local weak forms are built over local subdomains $\Omega_s$ with local boundaries.
This can easily be seen from the following example [4, 6], where we consider Poisson's equation $\nabla^2 u(x) = p(x)$ in a global and a local weak form. Essential boundary conditions are $u = \bar{u}$ on $\Gamma_u$, imposed with the penalty method, and natural BCs are $\frac{\partial u}{\partial n} = \bar{q}$ on $\Gamma_q$. This gives

$$\int_\Omega \left(\nabla^2 u^h - p\right) v\, d\Omega - \alpha \int_{\Gamma_u} \left(u^h - \bar{u}\right) v\, d\Gamma = 0,$$

where $\Omega$ is the global domain and $v$ is the test function. After applying the divergence theorem, the global symmetric weak form follows as

$$\int_\Omega \left(v_{,i}\, u^h_{,i} + p\, v\right) d\Omega - \int_{\Gamma_q} \bar{q}\, v\, d\Gamma + \alpha \int_{\Gamma_u} \left(u^h - \bar{u}\right) v\, d\Gamma = 0,$$

and analogously a local symmetric weak form can be reached by applying the divergence theorem over a local subdomain $\Omega_s$:

$$\int_{\Omega_s} \left(v_{,i}\, u^h_{,i} + p\, v\right) d\Omega - \int_{\partial\Omega_s} u^h_{,i}\, n_i\, v\, d\Gamma + \alpha \int_{\Gamma_{su}} \left(u^h - \bar{u}\right) v\, d\Gamma = 0,$$

$$\int_{\Omega_s} \left(v_{,i}\, u^h_{,i} + p\, v\right) d\Omega - \int_{\Gamma_s^{*}} u^h_{,i}\, n_i\, v\, d\Gamma - \int_{\Gamma_{su}} u^h_{,i}\, n_i\, v\, d\Gamma - \int_{\Gamma_{sq}} \bar{q}\, v\, d\Gamma + \alpha \int_{\Gamma_{su}} \left(u^h - \bar{u}\right) v\, d\Gamma = 0.$$

Herein, $\partial\Omega_s$ is the boundary of the local subdomain $\Omega_s$, and $\Gamma_s^{*}$ stands for the part of $\partial\Omega_s$ which is in the interior of the global domain. $\Gamma_{su}$ and $\Gamma_{sq}$ are those parts of $\partial\Omega_s$ lying on the boundary of the global domain where essential and natural BCs are applied, respectively. Clearly, $\partial\Omega_s = \Gamma_s^{*} \cup \Gamma_{su} \cup \Gamma_{sq}$. This equation holds irrespective of the size and shape of $\Omega_s$, and the problem becomes one as if we were dealing with a localized boundary value problem over an $n$-dimensional sphere $\Omega_s$ [4, 6]. It is natural to choose the supports $\Omega_i$ of the weighting functions as the local subdomains $\Omega_s$, which is assumed in the following.
Integral Equation (LBIE), which has been worked out, for reasons of clarity, in the next subsection.

MLPG 5: The test function is the characteristic function $\chi_i$ and thus constant over each local subdomain $\Omega_i$.

MLPG 6: The test function is identical to the trial function, and thus the special case of a Bubnov-Galerkin method results. The resulting method is similar to the EFG and the DEM, but the latter work with the global weak form instead of the local one. If spheres are used for the subdomains, the method has also been referred to as the Method of Finite Spheres (MFS) [37].
For a short summary of the MLPG and LBIE concept, the reader is referred to
[7].
5.6 Local Boundary Integral Equation (LBIE)
The LBIE is the meshfree (and local) equivalent of the conventional boundary element method (BEM). We shall therefore briefly recall some important features of the BEM.
The BEM reduces the dimensionality of the problem by one by involving the trial functions and their derivatives only in the integral over the global boundary of the domain. The BEM is based on the boundary integral equation (BIE), which can be obtained from the weak form by choosing the test functions equal to the infinite space fundamental solution of the (at least highest order) differential operator of the PDE. This restricts the usage of the BEM to cases where the infinite space fundamental solution is available. On the global boundary either the value u(x) or ∂u(x)/∂n is known. If some point y lies on the boundary, the BIE can be used as an integral equation for the computation of the respective unprescribed boundary quantity. In the BEM one has to deal with strong singularities (r⁻¹) and weak singularities (ln r) in the integrals. Therefore some integrals have to be considered in the Cauchy Principal Value (CPV) sense when the source point y is located on the boundary over which the integration is carried out [122]. After solving the system of equations, defined by a full and unsymmetric matrix, the values u(x) and ∂u(x)/∂n on the global boundary are known. The evaluation of the unknown function and its derivatives at certain single points within the domain involves the calculation of integrals over the entire global boundary, which may be tedious and inefficient [133].
Summarizing this, one can say the BEM drops the dimensionality of the
problem by one, is restricted to cases where the fundamental solution is known,
involves singularities in integral expressions and leads to a full and unsymmetric
but rather small matrix. Due to the fact that an exact solution (the infinite space
fundamental solution) is used as a test function to enforce the weak formulation,
a better accuracy may be achieved in numerical calculations [133].
The objective of the LBIE method is to extend the BEM idea to meshfree
applications based on a Local Boundary Integral Equation (LBIE) approach. In
the LBIE, a problem with the artificial local subdomain boundaries ∂Ωs occurs: for the local equations, neither u(x) nor ∂u(x)/∂n is known on those boundary parts (as long as they do not lie on the global boundary). Therefore the concept of a companion solution is introduced [133]. The test function is chosen to be v = u* − u', where u* is the infinite space fundamental solution and u' is the companion solution, which satisfies a certain Dirichlet problem over the subdomain Ωs. Thereby one can cancel out the ∂u(x)/∂n term in the integral over ∂Ωs [133]. Thus, by using this companion (or modified) fundamental solution, no derivatives of the shape functions are needed to construct the stiffness matrix for the interior nodes, nor for those nodes with no parts of their local boundaries coinciding with the part of the global boundary where EBCs are applied [132].
The subdomains in the LBIE are often chosen in the following way: a d-dimensional sphere, centered at y, is chosen, where for simplicity the size of Ωs of each interior node is chosen small enough that its corresponding local boundary ∂Ωs does not intersect with the global boundary of the problem domain [132]. Only the local boundary integral associated with a boundary node contains parts of the global boundary of the original problem domain [132].
The numerical integration of boundary integrals with strongly singular kernels requires special attention in the meshfree case of the LBIE where the boundary densities are only known digitally (e.g. in the case of MLS-approximation)
[122]. In [122], the authors claim:
In meshfree implementations of the BIE the question of singularities has to be reconsidered, because the boundary densities are not
known in a closed form any more. This is because the shape functions are evaluated only digitally at any required point. Thus, the
peak-like factors in singular kernels cannot be smoothed by cancellation of divergent terms with vanishing ones in boundary densities
before the numerical integration. The proposed method consists in
the use of direct limit approach and utilization of an optimal transformation of the integration variable. The smoothed integrands can
be integrated with sufficient accuracy even by standard quadratures
of numerical integration.
Compared to the conventional BEM, shortly described above, the LBIE method has the following advantages [5]: the stiffness matrix is sparse, and the unknown variable and its derivatives at any point inside the domain can easily be calculated from the approximated trial solution by integration only over the nodes within the domain of definition of the MLS approximation for the trial function at this point, whereas this involves an integration over all boundary points of the global boundary in the BEM.
Compared with MMs in general the LBIE is found to have the following
advantages [5]: An exact solution (the infinite space fundamental solution) is
used as a test function which may give better results, no derivatives of shape
functions are needed in constructing the stiffness matrix for the internal nodes
as well as for those boundary nodes with no EBC-prescribed sections on their
local integral boundaries (this is attractive as the calculations of derivatives of
shape functions from the MLS approximation may be quite costly [132]).
5.7
Throughout this paper the Partition of Unity FEM (PUFEM) [106], Generalized
FEM (GFEM) [123, 124], Extended FEM (XFEM) [19] and the Partition of
Unity Methods (PUM) [11] are considered to be essentially identical methods,
following e.g. [8, 9]. Thus, we do not even claim that those methods which have
the term finite element in their name necessarily rely on a mesh-based PU
(although this might have been the case in the first publications of the method).
Let us consider this element aspect in the sense of the Diffuse Element Method (DEM, see subsection 5.2), where it has already been shown that the same shape functions that arise in the meshfree MLS context may as well be interpreted as diffuse elements. The treatment of this aspect is not consistent throughout the
publications and it may as well be found that e.g. the GFEM is considered a
hybrid of the FEM and PUM [123]; in contrast, other authors [8, 9] including
the authors of this paper may consider the GFEM and PUM equivalent.
The approximation of the PUM takes the form

  u^h(x) = Σ_{i=1}^{N} Φ_i(x) p^T(x) v_i = Σ_{i=1}^{N} Φ_i(x) Σ_{j=1}^{l} p_j(x) v_{ij},

where Φ_i are the PU functions and p is the extrinsic basis with the coefficients v_{ij}.
The PU functions and the local approximation spaces cannot be chosen isolated from each other. There are combinations where the local spaces multiplied by the appropriate PU functions are linearly dependent or will at least lead
to an ill-conditioned matrix. For example, when a simple mesh-based PU of first-order consistency is used (a simple hat function PU) and the local approximation space is polynomial, this leads to linear dependency, which can easily be shown. PUs of MMs are not constructed with polynomials directly but rather with rational functions. However, a problem of nearly linear dependency remains, because an intrinsic polynomial basis is used for the construction of meshfree PUs (which is the reason for the good approximation properties of MMs in the case of polynomial-like solutions).
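This linear dependency can be checked numerically. The sketch below is our own illustration (not from the cited references): five hat functions on [0, 1] serve as a mesh-based PU of first-order consistency and are multiplied with the linear monomial basis p = (1, x); since the hats already reproduce x exactly, the enriched set loses rank.

```python
import numpy as np

nodes = np.linspace(0.0, 1.0, 5)      # 5 hat functions: a linear, mesh-based PU
xs = np.linspace(0.0, 1.0, 201)       # evaluation points

# hat function i = linear interpolation of the i-th unit vector
phi = np.stack([np.interp(xs, nodes, np.eye(5)[i]) for i in range(5)])

# enriched set {phi_i * 1, phi_i * x}: PU functions times a linear basis
B = np.vstack([phi, phi * xs]).T      # 201 x 10 collocation matrix
rank = np.linalg.matrix_rank(B)       # < 10: the enriched set is dependent

# explicit dependency: sum_i x_i*phi_i(x) - sum_i x*phi_i(x) = x - x = 0
coeff = np.concatenate([nodes, -np.ones(5)])
```

The null vector `coeff` makes the dependency explicit: the hat functions interpolate the identity, so the enrichment by x adds nothing new.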
A broad theoretical background for the PUMs has been developed in [8] and
[9], where results for the conventional FEM may be obtained as specific subcases
of the PUM.
5.8 hp-clouds
The hp-cloud method was developed by Duarte and Oden, see e.g. [43]. The
advantage of this method is that it considers from the beginning the h and p
enrichment of the approximation space [49]. In contrast to MLS and RKPM, the
order of consistency can be changed without introducing discontinuities, hence
the p-version of the hp-cloud method is smooth. The features of the PUM (mainly its enrichment ability and the ability to include a priori knowledge of the solution by introducing more than one unknown at a node and using a suitable extrinsic basis) are also valid for the hp-cloud method.
The approximation in the hp-cloud method is

  u^h(x) = Σ_{i=1}^{N} Φ_i(x) (u_i + p^T(x) v_i)
         = Σ_{i=1}^{N} Φ_i(x) u_i + Σ_{i=1}^{N} Φ_i(x) Σ_{j=1}^{l} p_j(x) v_{ij}.
Whereas the original hp-cloud method is a meshfree method, in [113] a hybrid method is introduced which combines features of the meshfree hp-cloud method with features of conventional finite elements. Here, the PU is furnished by (mesh-based) conventional lower-order FE shape functions [113]. The hp-cloud idea is used to produce a hierarchical FEM where all the unknown degrees of freedom are concentrated at the corner nodes of the elements. This ensures in general a more compact band structure than that arising from the conventional hierarchic form [113]. Thus, the enrichment of the finite element spaces is done on a nodal basis, and the polynomial order associated with a node does not depend on the polynomial order associated with neighbouring nodes [113]. The p-convergence properties of this method differ from those of traditional p-version elements, but exponential convergence is attained. Applications to problems with singularities are easily handled using cloud schemes [113].

5.9 Natural Element Method (NEM)

Natural neighbour interpolation was introduced by Sibson for data fitting and smoothing [120]. It is based on Voronoi cells T_i, which are defined as

  T_i = {x ∈ ℝ^d : d(x, x_i) < d(x, x_j) ∀ j ≠ i},

where d(x_i, x_j) is the distance (Euclidean norm) between x_i and x_j. The so-called Sibson functions or natural neighbour functions are defined by the ratio of polygonal areas of the Voronoi diagram, hence

  Φ_i(x) = A_i(x) / A(x),

where A(x) is the total area of the Voronoi cell T_x of x and A_i(x) = |T_i ∩ T_x| is the area of overlap of the Voronoi cell T_i of node i with T_x. This may also be seen from Fig. 9. The support of the Sibson functions turns out to be complex: it is the intersection of the convex hull with the union of all Delaunay circumcircles that pass through node i. The shape functions are C^∞ everywhere except at the nodes, where they are only C^0; it is possible to obtain C^1 continuity there with more elaborate ideas. These Sibson functions have been used as test and shape functions in the Natural Element Method (NEM) [23, 125].

There exists also the possibility to use non-Sibsonian shape functions, which were introduced by Belikov et al. in [13]; they take the form

  Φ_i(x) = (s_i(x)/h_i(x)) / Σ_{j=1}^{n} (s_j(x)/h_j(x)),

where s_i(x) is the length of the Voronoi edge which the cells of x and node i have in common, and h_i(x) is the distance between x and node i (see Fig. 9).
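Because the Sibson functions are defined as area ratios, they can be estimated without constructing the Voronoi diagram explicitly. The following Monte Carlo sketch is our own illustration (the function name, sampling box and sample count are arbitrary choices): it estimates A_i(x)/A(x) by counting from which original cell each sample of x's inserted cell is "stolen". The partition of unity then holds by construction, and the coordinates approximately reproduce x linearly.

```python
import numpy as np

def sibson_coords_mc(x, nodes, n_samples=200_000, box=1.5, seed=0):
    """Monte Carlo estimate of the Sibson coordinates Phi_i(x) = A_i(x)/A(x).

    A sample point lies in the Voronoi cell that x would claim after
    insertion if it is closer to x than to every original node; it is then
    counted for the node whose cell it was stolen from.
    """
    rng = np.random.default_rng(seed)
    s = x + (rng.random((n_samples, 2)) - 0.5) * 2.0 * box   # samples near x
    d_x = np.linalg.norm(s - x, axis=1)                      # distance to x
    d_n = np.linalg.norm(s[:, None, :] - nodes[None, :, :], axis=2)
    owner = d_n.argmin(axis=1)                               # nearest node
    stolen = d_x < d_n.min(axis=1)                           # inside T_x
    counts = np.bincount(owner[stolen], minlength=len(nodes))
    return counts / counts.sum()                             # A_i / A

# example: 4 nodes at the corners of the unit square, query point inside
nodes = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
lam = sibson_coords_mc(np.array([0.3, 0.4]), nodes)
```

For a point inside the convex hull, Σ_i Φ_i = 1 exactly and Σ_i Φ_i x_i ≈ x (the linear completeness of the Sibson interpolant), up to sampling noise.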
Figure 9: Construction of the Sibson and non-Sibsonian interpolant. The nodes
1, 2, 3 and 4 are called natural neighbours of x.
5.10 Meshless Finite Element Method (MFEM)

The Meshless Finite Element Method was proposed in [68, 69]. The method is motivated as follows: in MMs the connectivity between the nodes can always be determined in bounded time. However, the time for the generation of a mesh as a starting point of a mesh-based simulation may not be bounded: although automatic mesh generators may find some mesh, it is not guaranteed that the mesh quality is sufficient for convergence. Especially in 3D automatic mesh generation, undesirable large-aspect-ratio elements with almost no area/volume (slivers) may result, degrading the convergence rate considerably. The procedure of identifying and repairing these elements, often involving manual overhead, may require an unbounded number of iterations.

In the MFEM, based on Voronoi diagrams, shape functions inside each polyhedron are determined using non-Sibsonian interpolation [69], see subsection 5.9. The shape functions share the Kronecker delta property. They are rather simple and reduce for certain cases to the standard linear FEM shape functions. Consequently, only low-order quadrature rules are necessary in the MFEM, leading to a very efficient method.

One may argue whether or not this method is meshfree. The originators of the MFEM claim in [68] that it can as well be seen as a finite element method using elements with different geometric shapes; meshfree ideas are only considered in the sense of finding shape functions for the arbitrary elements.

5.11 Reproducing Kernel Element Method (RKEM)

The RKEM was recently introduced by Li, Liu, Han et al. in a series of four papers [96, 88, 102, 121] as a hybrid of the traditional finite element approximation and the reproducing kernel approximation technique. It may be considered as an answer to the question of how to find arbitrarily smooth finite element interpolations. This old problem is addressed and discussed mainly in [88]; we may summarize from there that even the C^1 continuous elements needed for the simulation of 4th-order boundary value problems are difficult to obtain in the standard FEM.

The smoothness of the RKEM interpolation is achieved by involving RKPM ideas as outlined in subsection 4.3. The Kronecker delta property is maintained in the RKEM, thereby simplifying the imposition of EBCs, which requires special attention for MMs, see subsection 6.1. The construction of the RKEM interpolation may be summarized as follows:
  u^h(x) = Σ_{j=1}^{n} N_j(x) u(x_j)
         = Σ_{e ∈ el} Σ_{i ∈ ne} N*_{e,i}(x) Ψ_e(x) u(x_{e,i}),

where el denotes the set of elements, ne the set of nodes of element e, N*_{e,i} the globalized element shape function associated with node i of element e, and Ψ_e the window (kernel) function which localizes it.
The kernel is evaluated in a way that consistency of the interpolation is maintained. The same methodology as shown for the RKPM is used for this purpose,
including the idea of a correction function and the solution of a small system of
equations in order to obtain consistency. The continuity of the resulting interpolation only depends on the continuity of the involved window functions which
localize the global partition polynomials.
The resulting shape functions of the RKEM are considerably more complex than standard FEM shape functions; see [96, 88, 102, 121] for graphical representations of the smooth but rather oscillatory functions. This clearly leads to a large number of integration points needed to evaluate the weak form of a problem. In [121], numerical experiments in two dimensions have been performed
with up to 576 quadrature points per element.
Finally, it should be mentioned that there is a relation between the Moving
Particle Finite Element Method (MPFEM), introduced by Hao et al. [61, 62] and
the RKEM. The concept of globalizing element shape functions and employing
RKPM ideas to obtain consistency is also part of the MPFEM. However, it
was mentioned in [96] that the nodal integration instead of full integration (see
subsection 6.2) leads to numerical problems in the MPFEM.
5.12 Others
The number of MMs reviewed in this paper must be limited in order to keep
it at reasonable length. We considered most methods which are mentioned and
listed again and again in the majority of the publications on MMs. It is thus our belief that we have covered what people mean when they use the term meshfree methods.
We exclude all methods that have been constructed for certain specific problems, e.g. for fluid problems, like the Finite Volume Particle Method (FVPM), the Finite Mass Method (FMM), the moving particle semi-implicit method (MPS) etc. Also meshfree methods from the area of molecular dynamics (MD),
the Generalized Finite Difference Method (GFDM), Radial Basis Functions
(RBF), Local Regression Estimators (LRE) and Particle in Cell methods (PIC)
are not considered. Although all these and many other methods are meshfree
in a sense, we believe that they do not directly fit into the concept of this paper
although relations undoubtedly exist.
5.13
The local weak form makes the transformation of domain integrals onto surface integrals via the divergence theorem possible for most of the integral expressions in the weak form, and thereby reduces the dimension of the integration domain by one. This can save computing time significantly. However, our own experience gave unsatisfactory results for many problems, including advection-diffusion, Burgers and Navier-Stokes equations. The authors of this method, Shen and Atluri in [4], obtained good results for Poisson's equation but did not use it for the solution of flow-related problems.
LSMMs solve the least-squares weak form of a problem. Advantages and disadvantages of these methods are well known, see e.g. [70]. It has been found that LSMMs are considerably more robust with respect to the accuracy of the integration; fewer integration points are needed for suitable results.
GFEM, XFEM, PUM, PUFEM and hp-clouds are based on the concept of
an extrinsic basis. Thereby the order of consistency of an existing PU can be
raised or a priori knowledge about the solution can be added to the solution
spaces. The final system of equations becomes significantly larger. In practice
these methods proved to be successful in very special cases (like the solution of
the Helmholtz equation).
NEM and MFEM rely on shape functions which are constructed based on
Voronoi diagrams (Sibson and non-Sibsonian interpolations). They do not take
the MLS/RKPM way to obtain a certain consistency. It seems to the authors of
this paper that the use of Voronoi diagrams as an essential part of the method is
already something in-between meshfree and mesh-based. This becomes obvious
in the MFEM, which may either be interpreted in a mesh-based way as a method which employs general polygons as elements, or in a meshfree way because the Voronoi diagram rather than an explicit mesh is needed. So one might say that the procedure in a mesh-based method is: node distribution → Voronoi diagram → mesh → shape functions. In the NEM and MFEM only the mesh step is skipped, whereas in standard MMs based on the MLS/RKPM concepts we only have the steps: node distribution → shape functions.
Concerning the RKEM, one may expect that the complex nature of the shape functions in this method will hinder a breakthrough of this approach in practice. At least this method provides an answer to the question of how to find highly continuous element interpolations. Simple approaches are not available, and the complexity of the RKEM approximations might be the necessary price to pay.
There are also other MMs which rely on particular choices of the test functions; these will not be discussed further. It is impossible to even mention every MM in this paper; however, most of the important and frequently discussed MMs should be covered.
6 Related Problems

6.1 Essential Boundary Conditions
Due to the lack of Kronecker delta property of most of the meshfree shape functions the imposition of EBCs requires certain attention. A number of techniques
have been developed to perform this task. One may divide the various methods into those that modify the weak form, those that employ shape functions with the Kronecker delta property along the essential boundary, and others. The first class of methods is described in subsections 6.1.1 to 6.1.4, the second in 6.1.5 to 6.1.8, and other methods not falling into these two classes in 6.1.9 and 6.1.10. We only briefly describe the methods, mentioning some of their important advantages and disadvantages; the interested reader is referred to the references given below.
It is our impression that the imposition of EBCs in MMs is only a solved
problem in the sense that it is easily possible to fulfill the prescribed boundary
values directly at the nodes. However, as e.g. noted in [60], a degradation in the
convergence order may be found for most of the imposition techniques in two
or more dimensions for consistency orders higher than 1.
6.1.1 Lagrangian Multipliers
A very common approach for the imposition of EBCs in MMs is the Lagrangian
multiplier method. It is well known that in this case the minimization problem becomes a saddle point problem [24]. This method is also used in many other applications of numerical methods (not related to MMs); therefore, it is not described
here in further detail.
The Lagrangian multiplier method is a very general and accurate approach [17]. However, the Lagrangian multipliers need to be solved for in addition to the discrete field variables, and a separate set of interpolation functions for the Lagrangian multipliers is required. This set has to be chosen carefully with respect to the Babuska-Brezzi stability condition [24], which influences the choice of interpolation and the number of Lagrangian multipliers used. In addition to the increase in the number of unknowns, the system structure becomes awkward, i.e. it becomes

  [ K    G ]
  [ G^T  0 ]

instead of only [K]. This matrix is not positive definite and possesses zeros on its main diagonal, so solvers taking advantage of positive definiteness can no longer be used [18, 103]. Especially for dynamic and/or nonlinear problems (e.g. [25]) this larger system has to be solved at each
time and/or incremental step (in nonlinear problems, incremental and iterative
procedures are required).
6.1.2 Penalty Approach

In the penalty approach a penalty term

  α ∫_{Γu} Φ (u^h − ū) dΓ

with α ≫ 1 is added to the weak form of the problem, see e.g. [112]. The success of this method is directly related to the usage of large values for α. This, on the other hand, influences the condition number of the resulting system of equations negatively, i.e. the system becomes more and more ill-conditioned with increasing values of α. The advantages of the penalty approach are that the size of the system of equations stays constant and that positive definiteness, where present, is preserved for sufficiently large α.
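The trade-off can be observed directly in a small numerical sketch of our own: adding α to the diagonal entry of a constrained node keeps the system size and definiteness but inflates the condition number roughly in proportion to α.

```python
import numpy as np

n = 20
# SPD model stiffness matrix (1D Laplacian with fixed ends)
K = (np.diag(np.full(n, 2.0))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1))

conds = []
for alpha in (1e2, 1e6, 1e10):
    Kp = K.copy()
    Kp[0, 0] += alpha                # penalised essential BC at node 0
    conds.append(np.linalg.cond(Kp))
# cond(Kp) grows with alpha while the matrix stays positive definite
```

The three condition numbers grow monotonically with α, which is exactly the ill-conditioning effect described above.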
6.1.4 Nitsche's Method
6.1.5 Coupling with Finite Elements
and the EBCs can directly be applied. One may also interpret the shape functions Ψ^T = Φ^T D^{-1} as the transformed meshfree shape functions having the Kronecker delta property. It is important to note that these transformed functions are no longer local, as D^{-1} is a full matrix.
Figure 10: Usage of a finite element string along the essential boundary for
imposition of EBCs.
  u(x) ≈ Σ_{i=1}^{N} Φ_i(x) u_i
6.1.6 Transformation Method
There exist a full and a partial (boundary) transformation method, see e.g. [25, 29, 87]. In the first, an inversion of an N × N matrix is required and the Kronecker delta property is obtained at all nodes; in the latter, only a reduced system has to be inverted and the Kronecker delta property is obtained at the boundary nodes only. It has been mentioned in [29] that the transformation methods are usually used in conjunction with Lagrangian kernels, see subsection 4.5, because then the matrix inversion has to be performed only once at the beginning of the computation.
The basic idea of the full transformation method is as follows: The relation
between the (real) unknown function values uh (xj ) and the (fictitious) nodal
values u
bi for which we solve the global system of equations is
  u^h(x_j) = Σ_{i=1}^{N} Φ_i(x_j) û_i,   i.e.   u = D û,                (6.1)

which follows directly when the approximation u^h(x) = Σ_i Φ_i(x) û_i is evaluated at all nodal positions x_j, j = 1, …, N. The final system of equations which results from a meshfree procedure is A û = b. However, the boundary conditions are prescribed for the real nodal values u rather than for the fictitious values û. Therefore, u is split into the contributions of the unconstrained and the constrained nodes,

  u(x) = Σ_{i=1}^{N} Φ_i(x) u_i^Ω + Σ_{i=1}^{N} Φ_i(x) u_i^Γ = u^Ω + u^Γ.

Evaluating this at the nodal positions, where the prescribed values g(x_j) have to be matched at the constrained nodes, gives

  u(x_j) = Σ_{i=1}^{N} Φ_i(x_j) u_i^Ω + Σ_{i=1}^{N} Φ_i(x_j) u_i^Γ = g(x_j),

or in matrix form

  D û^Ω + D û^Γ = g,
  (N×N)(N×1) + (N×N)(N×1) = (N×1),

hence û^Γ = D^{-1} (g − D û^Ω), where Ψ^T = Φ^T D^{-1} may again be interpreted as the transformed shape functions for which the EBCs can be directly applied.
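The transformation can be sketched with a small 1D MLS basis. The implementation below is our own minimal version (linear basis, truncated Gaussian weight; node count and support radius are arbitrary choices): the matrix D collects Φ_i(x_j); it is not the identity, but Ψ^T(x) = Φ^T(x) D^{-1} equals 1 at "its" node and 0 at all others.

```python
import numpy as np

def mls_shape(x, nodes, rad=0.5):
    """1D MLS shape functions: linear basis p = (1, x), truncated Gaussian
    weight; returns the vector (Phi_1(x), ..., Phi_N(x))."""
    r = np.abs(x - nodes) / rad
    w = np.where(r < 1.0, np.exp(-(r / 0.4) ** 2), 0.0)
    P = np.vstack([np.ones_like(nodes), nodes]).T    # basis at the nodes
    A = P.T @ (w[:, None] * P)                       # moment matrix
    return w * (P @ np.linalg.solve(A, np.array([1.0, x])))

nodes = np.linspace(0.0, 1.0, 6)
# D[j, i] = Phi_i(x_j): all shape functions evaluated at all nodes
D = np.array([mls_shape(xj, nodes) for xj in nodes])

# transformed shape functions Psi^T(x) = Phi^T(x) D^{-1} (full, non-local)
psi_at_node2 = mls_shape(nodes[2], nodes) @ np.linalg.inv(D)
```

`psi_at_node2` equals the unit vector e_2 up to roundoff, i.e. the transformed functions possess the Kronecker delta property, at the price that D^{-1} couples all nodes.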
6.1.7
It was already realized by Lancaster and Salkauskas when introducing the MLS
in [82] that singular weighting functions at all nodes recover Kronecker delta
84
Related Problems
property of the shape functions. Advantage of this has been made e.g. in [73]
for the easy imposition of EBCs in a Galerkin setting.
Instead of applying singular weight functions at all nodes, in [29] a boundary
singular kernel approach is presented. Here, only the weight functions associated with constrained boundary nodes are singular. By using a singular weight function at a point x_D where an EBC is prescribed, we obtain a shape function with Φ_D(x_D) = 1, while all other shape functions at x_D are Φ_i(x_D) = 0. Note that Φ_D(x_i) ≠ 0, thus Φ_D is not a real interpolating function having the Kronecker delta property (although Φ_D(x_D) = 1), because it is not necessarily 0 at all other nodes [29].
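The mechanism can be seen in the simplest setting, a Shepard (zeroth-order MLS) interpolant with the singular weight w_i = 1/r_i². This is our own sketch, not from the cited references; the cutoff `eps` only guards the evaluation exactly at a node. As r_i → 0 the weight of the nearest node dominates, so the nodal values are matched exactly.

```python
import numpy as np

def shepard_singular(x, nodes, vals, eps=1e-12):
    """Shepard interpolation with singular weights w_i = 1 / r_i^2;
    the singularity enforces the Kronecker delta property at the nodes."""
    r2 = (x - nodes) ** 2
    j = int(np.argmin(r2))
    if r2[j] < eps:                  # evaluation point coincides with node j
        return float(vals[j])
    w = 1.0 / r2
    return float(np.dot(w, vals) / w.sum())

nodes = np.linspace(0.0, 1.0, 5)
vals = nodes ** 2
```

At the nodes the interpolant returns the nodal values exactly; between the nodes it is a convex combination of them.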
It is claimed in [63] that singular weighting functions lead to less accurate results, especially for relatively large supports. This is also due to the necessity of distributing integration points carefully such that they are not too close to the singularity, which otherwise leads to ill-conditioned mass matrices. Hence, singular weighting functions are not recommended.
6.1.8 PUM Ideas
Referring to subsection 5.7, the EBCs can be implemented by choosing the local
approximation spaces such that the functions satisfy the Dirichlet boundary
conditions [8, 9]. For example in [17, 87] Legendre polynomials are used as an
extrinsic basis recovering Kronecker delta property.
6.1
FEM and MMs the belonging line in the matrix will look as
FEM
MM
Boundary Collocation
... = 0 = 1 = 0...
. . . 6= 0 6= 1 =
6 0... .
Thus, one can see the similarity of the FEM and MMs in the matrix line which belongs to a node x_i where an EBC has to be enforced. The difference lies in the entries: in the FEM the values of the line are known in advance due to the Kronecker delta property, whereas in MMs all shape functions Φ at x_i have to be computed. However, the idea in both methods stays the same.
It is important to note that an important condition is not fulfilled for the
standard boundary collocation method: It is required that the test functions
in a weak form must vanish along the essential boundary [128]. Neglecting
this leads to a degradation in the convergence order, especially for meshfree
shape functions with high consistency orders. Therefore, Wagner and Liu propose a corrected collocation method in [128] which considers the problem of
non-vanishing test functions along the essential boundary. This idea is further
considered and modified in [130].
6.1.10 D'Alembert's Principle
Using d'Alembert's principle for the imposition of EBCs was first introduced by Günther and Liu in [58]; this approach has similarities with the transformation methods (subsection 6.1.6). D'Alembert was the first to formulate a principle of replacing n differential equations of the form

  f^inert(d, ḋ, d̈) + f^int(d, ḋ) = f^ext + f^r

and m constraints

  g(d) = 0

by n − m unconstrained equations. Herein, f^inert are the inertia forces, f^int and f^ext are internal and external forces, respectively, and f^r are the reaction forces, which can be written as f^r = Gλ, where G^T = ∂g/∂d^T is the constraint matrix.
With this idea it is also possible to impose BCs in Galerkin methods, hence also in MMs. Here, we might have a system of the kind [57]

  ∫_Ω ( w_u (…) + w_{u,x} (…) ) dΩ = ∫_{Γh} w_u h̄ dΓ + ∫_{Γg} w (u − g) dΓ + ∫_{Γg} w_u λ dΓ,

where the left-hand side corresponds to f^inert + f^int, the first term on the right-hand side to f^ext, the second to the constraint g(d) = 0 and the third to the reaction forces f^r.
Discretization leads to a matrix equation, and the application of d'Alembert's principle proceeds analogously to the case shown above. It only remains to find a suitable (n − m) × 1 vector y of generalized variables. The mapping from the generalized to the nodal variables d, i.e. the Jacobian matrix J, can be obtained via an orthogonalization procedure, e.g. with the help of the Gram-Schmidt algorithm. Consequently, J^T G = 0 will be fulfilled and also J^T J = I.
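A minimal numerical sketch of this procedure (our own illustration; the orthogonalization is done with a QR factorization, which is equivalent to Gram-Schmidt): for the hypothetical 1D model −u'' = 0 on (0, 1) with the constraints u(0) = 0 and u(1) = 1, the columns of J span the null space of G^T, and the reduced (n − m)-dimensional system is regular even though K is singular.

```python
import numpy as np

n, h, m = 11, 0.1, 2
# free-free stiffness of -u'' = 0 (singular: constants in the null space)
K = (np.diag(np.full(n, 2.0))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h
K[0, 0] = K[-1, -1] = 1.0 / h
f = np.zeros(n)

G = np.zeros((n, m))                  # constraint matrix, G^T d = g
G[0, 0] = G[-1, 1] = 1.0
g = np.array([0.0, 1.0])

# J with J^T G = 0 and J^T J = I via QR (Gram-Schmidt orthogonalization)
Q, _ = np.linalg.qr(G, mode='complete')
J = Q[:, m:]                          # orthonormal basis of null(G^T)

d_p = G @ np.linalg.solve(G.T @ G, g) # particular solution, G^T d_p = g
y = np.linalg.solve(J.T @ K @ J, J.T @ (f - K @ d_p))
d = d_p + J @ y                       # satisfies the constraints exactly
```

The reduced matrix J^T K J is symmetric positive definite here, and the recovered d is the exact linear ramp.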
6.2 Integration
Using the method of weighted residuals leads to the weak form of a PDE problem. The expressions consist of integration terms which have to be evaluated numerically. In MMs this is the most time-consuming part (although it is parallelizable, see subsection 6.6), as meshfree shape functions are very complicated and a large number of integration points is required in general. Not only are the functions rational, they also have a different form in each small region Ω_{I,k} (see Fig. 11) in which the same set of nodes has influence. As a consequence, especially the derivatives of the shape functions may have a complex behaviour along the boundaries of each Ω_{I,k}. In Fig. 11, each colored area stands for a region Ω_{I,k} of a certain support Ω_I.

Figure 11: Overlap of circular supports in a regular two-dimensional node distribution; the support Ω_I of node I is highlighted. The differently colored regions Ω_{I,k} of this support have different influencing nodes and consequently a different rational form of the shape function.
It is an important advantage of the collocation MMs that they solve the
strong form of the PDE and no integral expressions have to be calculated. However, the disadvantages of these methods have been mentioned, see e.g. subsection 5.1.
Numerical integration rules  Numerical integration rules are of the form

  ∫_Ω f(x) dΩ ≈ Σ_i f(x_i) w_i

and vary only with regard to the locations x_i and weights w_i of the integration points (note that the integration weights w_i have nothing to do with the MLS weighting function or the RKPM window function). Available forms include, for example, Gaussian integration and trapezoidal integration. (Monte Carlo integration may also be considered as an interpretation of the integration in collocation MMs.)
Gaussian integration rules are most frequently used for the integration in MMs. They integrate polynomials of order 2n_q − 1 exactly, where n_q is the number of quadrature points. The special weighting of this rule only makes sense if the meshfree integrands are sufficiently polynomial-like in the integration domains. That is, if the integration domains in which the integration points are distributed according to the Gauss rule are small enough that the rational, sometimes non-smooth character of the meshfree integrands is of little importance, then suitable results may be expected.
Otherwise, if the rational, non-smooth character of the integrand is of importance, e.g. in the case where the integrand (being a product of a test and a trial function) is zero in part of the integration area, the trapezoidal rule may be preferred.
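The effect is easy to reproduce numerically. The following is our own sketch with a hat-shaped stand-in for a meshfree integrand: a composite Gauss rule whose cell boundary is aligned with the kink integrates exactly, while a single high-order Gauss cell spanning the kink does not.

```python
import numpy as np

def gauss_integrate(f, a, b, n_cells, n_gp):
    """Composite Gauss-Legendre quadrature with n_gp points per cell."""
    xi, wi = np.polynomial.legendre.leggauss(n_gp)
    edges = np.linspace(a, b, n_cells + 1)
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mid, half = 0.5 * (lo + hi), 0.5 * (hi - lo)
        total += half * np.sum(wi * f(mid + half * xi))
    return total

hat = lambda x: np.maximum(0.0, 1.0 - np.abs(x))   # kink at x = 0, integral = 1

err_aligned = abs(gauss_integrate(hat, -1.0, 1.0, 2, 2) - 1.0)   # cells meet at 0
err_spanning = abs(gauss_integrate(hat, -1.0, 1.0, 1, 5) - 1.0)  # kink inside cell
```

With the cell boundary on the kink the integrand is polynomial per cell and the low-order rule is exact; the single 5-point cell, despite its higher order, carries a visible error.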
This, however, assumes an exact integration, which is not possible for the rational test and trial functions of MMs. Therefore, the divergence theorem is not fully correct in terms of the integration error, with the consequence that the patch test is no longer exactly fulfilled (which is related to a loss of consistency). The patch test may thus only be fulfilled exactly (to machine precision) as long as the problem is given in the so-called Euler-Lagrange form

  ∫_Ω w (Lu − f) dΩ = 0,

but not after manipulations with the divergence theorem. In the Euler-Lagrange form the integration error plays no role and the patch test is fulfilled exactly for all numerical integration rules.
In spite of these remarks it is in general not difficult in practice to employ integration rules that lead to reasonably accurate solutions. In the following, approaches for the numerical integration of the weak form in MMs are described.
6.2.1
Here, the domain is divided into integration domains over which Gaussian
quadrature is performed in general. The resulting MMs are often called pseudomeshfree as only the approximation is truly meshfree, whereas the integration
90
Related Problems
6.3
91
Here, it shall be recalled that the support of the shape function i (x) is
equivalent to the support of the weighting function wi (x) and can be of arbitrary
shape. But for the proposed bounding box method parallelepipeds producable
by tensor product weighting functions, see subsection 4.5 must be taken as
support regions, because the alignment of integration cells with spherical supports is almost impossible [39] (see e.g. Fig. 11). Therefore tensor product based
supports have to be used here, because then the overlapping supports construct
several polygons, for which integration rules are readily available.
Griebel and Schweitzer go the same way in [53] using sparse grid integration
rules [51] in the intersecting areas.
The use of adaptive integration by means of adaptively refining the mesh
(which does not have to be conforming) or cell structure has been shown in
[123].
requires some kind of mesh. In case of a background mesh, nodes and integration cell vertices coincide in general as in conventional FEM meshes; however, it is important to note that the background mesh does not have to be conforming, and hanging nodes may easily be employed. In case of integration with a cell structure, nodes and integration cell vertices do in general not coincide at all [39]. This is depicted in Fig. 12.

6.2.3

This method is a natural choice for the MLPG methods based on the local weak form but may also be used for any other of the Galerkin MMs. The resulting scheme is truly meshfree. The domain of integration is directly the support of each node, or even each intersection of the supports, respectively. The results in the latter case are much better than in the classical mesh- or cell-based integration of the pseudo-meshfree methods, for the same reason as in the above mentioned closely related alignment technique.
The problem of background meshes and cells is that the integration error
which arises from the misalignment of the supports and the integration domains
is often higher than the one which arises from the rational character of the shape
functions [39]. Accuracy and convergence are thus affected mostly by this misalignment, and it might be possible that even higher order Gaussian rules do not lead to better results [39]. Note that in the case of the FEM, supports and integration domains always coincide.
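The misalignment effect can be illustrated with a small 1D experiment (an illustrative sketch; the hat-shaped weight and the cell edges are made-up values). A 2-point Gauss rule per cell is exact when the cell edges meet the kinks at the support boundary, but not otherwise:

```python
# Sketch: integration error caused by background cells that are not aligned
# with the support of a weighting function.  The hat weight has kinks at its
# support edges 0.2 and 0.8; Gauss rules integrate it exactly only if the
# cell edges meet those kinks.

GAUSS_2 = [(-1 / 3**0.5, 1.0), (1 / 3**0.5, 1.0)]

def gauss_cells(f, edges):
    """2-point Gauss quadrature on the cells defined by a sorted edge list."""
    total = 0.0
    for left, right in zip(edges[:-1], edges[1:]):
        h = right - left
        for xi, w in GAUSS_2:
            total += w * 0.5 * h * f(left + 0.5 * h * (xi + 1.0))
    return total

def weight(x):                        # hat weight with support [0.2, 0.8]
    return max(0.0, 1.0 - abs(x - 0.5) / 0.3)

exact = 0.3                           # triangle area: 0.6 * 1 / 2
err_aligned = abs(gauss_cells(weight, [0.0, 0.2, 0.5, 0.8, 1.0]) - exact)
err_misaligned = abs(gauss_cells(weight, [0.0, 0.25, 0.5, 0.75, 1.0]) - exact)
```

With aligned cells the integrand is polynomial on every cell and the error is at machine precision; with misaligned cells a noticeable error remains no matter how smooth the rule.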
Special Gauss rules and mappings can be used to perform efficient integration also for spheres intersecting with the global boundary of the domain, which are then no longer regular; see e.g. [37]. The principle is shown in Fig. 14.
6.3 Coupling of Meshfree and Mesh-based Methods
It is often desirable to limit the use of MMs to some parts of the domain where their unique advantages, i.e. meshfree character, fast convergence, good accuracy, smooth
derivatives, and trivial adaptivity, are beneficial. This is because the computational burden of MMs is often much larger than in conventional mesh-based methods; thus coupling can save significant computing time. The objective is always to use the advantages of each method.
We only refer to coupling procedures where coupled shape functions result.
Physically motivated ad hoc approaches, often aiming at the conservation of mass, volume and/or momentum, such as those used for the coupling of FEM and SPH (see [72] and references therein) without accounting for consistency aspects, are not further considered herein.
There are several methodologies to couple meshfree and mesh-based regions.
6.3.1 Coupling with a Ramp Function
Coupling with a ramp function was introduced by Belytschko et al. in [20]. The domain Ω is partitioned into disjoint domains Ω^el and Ω^MM with the common boundary Γ^MM. Ω^el is discretized by standard quadrilateral finite elements and is further decomposed into the disjoint domains Ω*, being the union of all elements along Γ^MM, also called transition area, and the remaining part Ω^FEM, connected by a boundary labeled Γ^FEM; clearly Γ^FEM ∩ Γ^MM = ∅. This situation is depicted in Fig. 15.
The mesh-based approximation u^FEM(x) = Σ_{i∈I^FEM} N_i(x) u(x_i) is defined in Ω^el, i.e. complete bilinear shape functions are defined over all elements. The meshfree approximation u^MM(x) = Σ_{i∈I^MM} Φ_i(x) u(x_i) may be constructed for all nodes in Ω with meshfree shape functions as they for example arise in the MLS method. However, one may also restrict the nodes I^MM where meshfree shape functions are employed to at least Ω^MM ∪ Ω*.

The resulting coupled approximation according to the ramp function method is defined as [20]

u^h(x) = u^FEM(x) + R(x) [u^MM(x) − u^FEM(x)]
       = (1 − R(x)) u^FEM(x) + R(x) u^MM(x),
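A minimal 1D sketch of this blending (illustrative only; the ramp and the two ingredient approximations are stand-ins, not the FE/MLS constructions of [20]) shows that the coupled approximation inherits the consistency that both ingredients share:

```python
# Sketch: blending a mesh-based and a meshfree approximation with a ramp
# function R, u^h = (1 - R) u_FEM + R u_MM.  The ingredient approximations
# are given here as callables that reproduce the same linear field exactly,
# so the blend does as well.

def ramp(x, a, b):
    """Linear ramp: 0 for x <= a (FE region), 1 for x >= b (meshfree region)."""
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    return (x - a) / (b - a)

def coupled(u_fem, u_mm, a, b):
    """Return the ramp-coupled approximation u^h as a callable."""
    def u_h(x):
        r = ramp(x, a, b)
        return (1.0 - r) * u_fem(x) + r * u_mm(x)
    return u_h

# Both approximations reproduce u(x) = 2x + 1 (first-order consistency),
# hence the ramp-coupled approximation reproduces it too.
u_h = coupled(lambda x: 2 * x + 1, lambda x: 2 * x + 1, a=0.4, b=0.6)
```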
6.3.2 Coupling with Reproducing Conditions
Coupling with reproducing conditions was introduced by Huerta et al. in [64, 65].
Compared to coupling with ramp functions, this approach has the important
advantage that a coupled PU with consistency of any desired order may be constructed, whereas the ramp function only achieves first order consistent PUs.
The same discretization of the domain into areas Ω^FEM, Ω* and Ω^MM and boundaries Γ^FEM and Γ^MM as described in subsection 6.3.1 and Fig. 15 is assumed.
An important difference of this approach is that the mesh-based approximation with FE shape functions is only complete in Ω^FEM and not in Ω*. In Ω*, only the FE shape functions of the nodes along Γ^FEM remain and are left unchanged throughout the coupling procedure; there are no FE shape functions of the nodes along Γ^MM, and these nodes may be considered deactivated FEM nodes. Meshfree shape functions are constructed, e.g. with the MLS technique, for the nodes in Ω^MM ∪ Ω* \ Γ^FEM.
In this approach, shape functions in Ω^FEM are provided by FEM shape functions only and in Ω^MM by meshfree techniques only. A coupling of the shape functions takes place only in Ω*. There we write for the mixed approximation

u^h(x) = Σ_{i∈I^MM} Φ̃_i(x) u(x_i) + Σ_{i∈I^FEM} N_i(x) u(x_i).

Figure 16: Resulting shape functions of the coupling approach with ramp function (subsection 6.3.1) and with reproducing conditions (subsection 6.3.2).
The objective now is to develop a mixed functional interpolation with the desired consistency in Ω*, without any modification of the FE shape functions [64, 65]. Thus, we want to deduce how to modify the meshfree approximation functions Φ_i in the presence of the (incomplete) FE shape functions. In section 4 many ways have been shown how to find an arbitrary order consistent meshfree approximation, e.g. via the MLS idea, Taylor series expansion etc. Here, we can employ the same ideas with the modified total approximation, which is Σ_{i∈I^MM} Φ̃_i(x) u(x_i) + Σ_{i∈I^FEM} N_i(x) u(x_i) instead of just Σ_{i∈I^MM} Φ_i(x) u(x_i).

In the following we do not always separate between the sum over meshfree nodes, Σ_{i∈I^MM}, and the sum over mesh-based nodes, Σ_{i∈I^FEM}, but just write Σ_{i=1}^N, where N is the total number of nodes. This can be assumed without loss of generality when N_i = 0 for i ∉ I^FEM and Φ_i = 0 for i ∉ I^MM. Here, the modified meshfree shape functions are deduced via the Taylor series expansion, fully equivalent to subsection 4.2.2 where this is shown in detail. At this point
only the main steps are repeated. Inserting the Taylor series expansion

u(x_i) = Σ_{|α|≥0} (x_i − x)^α / α! · D^α u(x)

into the modified total approximation gives

u^h(x) = Σ_{i=1}^N [Φ̃_i(x) + N_i(x)] u(x_i).

With the ansatz Φ̃_i(x) = p^T(x_i − x) a(x) w(x − x_i), collecting the terms of each order of the Taylor series and requiring consistency of order k leads to the system of equations

Σ_{i=1}^N [p(x_i − x) p^T(x_i − x) a(x) w(x − x_i) + N_i(x) p(x_i − x)] = p(0),

and hence, since w(x − x_i) = 0 for i ∉ I^MM and N_i(x) = 0 for i ∉ I^FEM,

M(x) a(x) = p(0) − Σ_{i∈I^FEM} N_i(x) p(x_i − x).

Rearranging and applying the shifting procedure of the basis argument with +x gives

u^h(x) = [p^T(x) − Σ_{i∈I^FEM} N_i(x) p^T(x_i)] M^{−1}(x) Σ_{i∈I^MM} w(x − x_i) p(x_i) u(x_i) + Σ_{i∈I^FEM} N_i(x) u(x_i),
where the moment matrix M(x) remains unchanged from the previous definition in section 4. The resulting coupled set of shape functions in one dimension is shown in the right part of Fig. 16. The shape functions Φ̃_i in the transition area are hierarchical [64, 65], because for any node x_k with k ∈ I^FEM the right
hand side of the system of equations becomes zero, which can be seen easily: p(0) − Σ_{i∈I^FEM} N_i(x_k) p(x_i − x_k) = p(0) − Σ_{i∈I^FEM} δ_ik p(x_i − x_k) = p(0) − p(x_k − x_k) = 0.
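The construction can be sketched in 1D (illustrative assumptions: a single remaining half-hat FE function, made-up particle positions, a hat weight, and a linear basis p(d) = (1, d)). The coupled set is checked to form a first-order consistent partition of unity, and the modified functions vanish at the FE node, illustrating the hierarchical property:

```python
# Sketch: modified meshfree shape functions coupled to an incomplete FE
# basis.  The MLS-type system M(x) a(x) = p(0) - sum_FEM N_i(x) p(x_i - x)
# is solved for a 2x2 moment matrix (linear basis).

def w_hat(d, rho=0.6):                       # meshfree weighting function
    return max(0.0, 1.0 - abs(d) / rho)

def n_fe(x):                                 # remaining (half) FE hat at node 0
    return max(0.0, 1.0 - 2.0 * x) if x >= 0.0 else 0.0

x_fe = [0.0]                                 # FE node kept in the transition area
x_mm = [0.25, 0.5, 0.75, 1.0]                # meshfree particles

def coupled_shape_functions(x):
    # moment matrix M(x) from the meshfree particles
    m00 = m01 = m11 = 0.0
    for xi in x_mm:
        d, wi = xi - x, w_hat(xi - x)
        m00 += wi; m01 += wi * d; m11 += wi * d * d
    # right-hand side p(0) - sum_FEM N_i(x) p(x_i - x)
    b0, b1 = 1.0, 0.0
    for xi in x_fe:
        d, ni = xi - x, n_fe(x)
        b0 -= ni; b1 -= ni * d
    # solve the 2x2 system M a = b
    det = m00 * m11 - m01 * m01
    a0 = (m11 * b0 - m01 * b1) / det
    a1 = (m00 * b1 - m01 * b0) / det
    phi = [w_hat(xi - x) * (a0 + a1 * (xi - x)) for xi in x_mm]
    return phi, [n_fe(x)]

x = 0.3
phi, n = coupled_shape_functions(x)
pu = sum(phi) + sum(n)                                          # should be 1
lin = sum(p * xi for p, xi in zip(phi, x_mm)) + n[0] * x_fe[0]  # should be x
```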
Concerning the continuity of the coupled approximation it is found [64, 65]
that the coupled shape functions are continuous if, first, the same order of
consistency is imposed all over (i.e. for both FEs and particles), namely,
n^MM = n^FEM. And second, the supports of the particles I^MM coincide exactly with the region where FEs do not have a complete basis. That is, no particles
are added in complete FEs (i.e. elements where no node has been suppressed).
Moreover, weighting functions are chopped off in those complete FEs.
It shall be mentioned that the above procedure can also be used to enrich the FE approximation with particle methods. For example, the following adaptive process seems attractive: compute an approximation with a coarse FE mesh, do an a posteriori error estimation and improve the solution with particles without any remeshing process [65].

6.3.3 Bridging Scale Method

The bridging scale method has been introduced for coupling in the RKPM context in [100] and is also discussed in [129]. Starting point is an incomplete finite element basis {N_i(x)}_{i∈I^FE} and a complete meshfree interpolation on the whole domain {Φ_i(x)}_{i∈I^MM}, which is in contrast to the approach of Huerta, see subsection 6.3.2, where particles may only be introduced where the FE interpolation is incomplete. For the bridging scale method, the necessity to evaluate meshfree shape functions in the whole domain obviously alleviates an important argument for coupling, which is the reduction of computing time.

Firstly, the viewpoint taken in the bridging scale method shall be briefly outlined. One wants to hierarchically decompose a function u(x) based on some projection operator P as

u = Pu + w − Pw,    (6.2)

where Pu is the projection onto the FE basis, Pu^h(x) = Σ_{i∈I^FEM} N_i(x) u(x_i). The meshfree interpolation w(x) = Σ_{i∈I^MM} Φ_i(x) u(x_i) is thought of as an enrichment of this FE basis. For the projection Pw of the meshfree interpolation w onto the finite element basis follows analogously

Pw^h(x) = Σ_{j∈I^FEM} N_j(x) w(x_j) = Σ_{j∈I^FEM} N_j(x) Σ_{i∈I^MM} Φ_i(x_j) u(x_i).

Inserting now these definitions of Pu, w and Pw into equation 6.2 gives

u(x) = Σ_{i∈I^FEM} N_i(x) u(x_i) + Σ_{i∈I^MM} [Φ_i(x) − Σ_{j∈I^FEM} N_j(x) Φ_i(x_j)] u(x_i).    (6.3)
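The structure of the enrichment term of equation 6.3 can be sketched as follows (illustrative: the FE mesh and the Gaussian stand-in for a meshfree shape function are made up). The enrichment ψ_i(x) = Φ_i(x) − Σ_j N_j(x) Φ_i(x_j) vanishes at every finite element node, since N_j(x_k) = δ_jk:

```python
# Sketch: the bridging scale enrichment psi_i, i.e. a meshfree function
# minus its projection onto the FE hat-function basis.
import math

fe_nodes = [0.0, 0.5, 1.0]

def n_fe(j, x):
    """Piecewise linear FE hat function of node j on the mesh fe_nodes."""
    h = 0.5
    return max(0.0, 1.0 - abs(x - fe_nodes[j]) / h)

def phi(x, xi=0.4):
    """Stand-in meshfree shape function (Gaussian bump at particle xi)."""
    return math.exp(-((x - xi) / 0.3) ** 2)

def psi(x):
    """Bridging scale enrichment: meshfree part minus its FE projection."""
    return phi(x) - sum(n_fe(j, x) * phi(fe_nodes[j]) for j in range(len(fe_nodes)))

vals_at_nodes = [psi(xk) for xk in fe_nodes]   # all (numerically) zero
```

Note that ψ_i vanishes only at the FE nodes, not along element edges, which is exactly the continuity/EBC issue discussed below.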
For a consistency proof of this formulation see [129]. In [66] the bridging
scale method has been compared with the coupling approach of Huerta [65], see
subsection 6.3.2. An important difference is that the term Φ_i(x_j) in equation 6.3 is constant, whereas for the coupling approach of Huerta it may be shown
after modifications of the structure of the resulting expressions that the analogous term is a function of x [66]. Furthermore, in the bridging scale method, in order to ensure the continuity of the approximation, the particles for the construction of the meshfree basis have to cover the whole domain. In contrast, in the approach of Huerta particles are only needed in areas where the FEM shape functions are not complete. The continuity consideration is directly related to problems of the bridging scale method for the imposition of EBCs: the resulting meshfree shape functions are only zero at the finite element nodes, however, not along element edges/faces along the boundary in 2D/3D respectively. Therefore, it is not surprising that the approach of Huerta turns out to be superior in [66].
6.3.4 Coupling with Lagrangian Multipliers
The coupling approach with Lagrangian multipliers couples two distinct domains, one for the FE part and the other for the MM part, via the weak form [63]. Consequently, this approach is very different from the previously discussed approaches: no coupled shape functions with a certain order of consistency are developed. As for a purely MM approach, it was shown that the rates of convergence for a combination of MMs and FEs can exceed those of the FEM. This method shares all the disadvantages mentioned for the imposition of EBCs with Lagrangian multipliers, see subsection 6.1.1.
6.4 Discontinuities

MMs need certain techniques to handle discontinuities such as cracks. Classical mesh-based methods have problems with these, because there the discontinuity must align with element edges, although ways have also been found to overcome this problem for these methods (e.g. [19]).

Figure 17: One has to be careful for non-convex boundaries. The support of node I should be modified; therefore, the same methods as for discontinuity treatment may be used.
It should be mentioned that the treatment of discontinuities has similar features to the treatment of non-convex boundaries, see Fig. 17. We cite
from [63]:
One has to be careful with performing MLS for a domain which
is strongly non-convex. Here, one can think of a domain with a
sharp concave corner. To achieve that MLS is well defined for such
a domain and to have that the shape functions are continuous on the
domain, it is possible that shape functions become non-zero on parts
of the domain (think of the opposite side of the corner) where it is
more likely that they are zero. Hence, nodal points can influence
the approximant uh on parts of the domain where it is not really
convenient to have this influence.
Here, we divide the methods into those which modify the supports along the discontinuity, see subsections 6.4.1 to 6.4.3, and those which incorporate discontinuous approximations as an enrichment of their basis functions, see subsection 6.4.4. See [116] for an interesting comparison of the methods which modify the supports.
6.4.1
Visibility Criterion
The visibility criterion, introduced in [18], may be easily understood by considering the discontinuity opaque for rays of light coming from the nodes. That
is, for the modification of a support of node I one considers light coming from
the coordinates of node I and truncates the part of the support which is in the
shadow of the discontinuity. This is depicted in Fig. 18.
A major problem of this approach is that at the discontinuity tips an artificial discontinuity inside the domain is constructed, and the resulting shape functions are consequently not even C^0 continuous. Convergence may still be reached [78]; however, significant errors result, and oscillations around the tip can occur, especially for larger dilatation parameters [116]. The methods discussed in the following may be considered as fixed versions of the shortcomings of the visibility criterion and show differences only in the treatment around the discontinuity tips.

It shall further be mentioned that for all methods that modify the support, which in fact is somehow a reduction of its prior size, there may be problems with the regularity of the k × k system of equations, see subsection 4.6, because fewer supports overlap with the modified support. Therefore, it may be necessary to increase the support size, leading to a larger bandwidth of the resulting system of equations. This aspect has been pointed out in [21].

6.4.2 Diffraction Method

The diffraction method [16, 116] considers the diffraction of the rays around the tip of the discontinuity. For the evaluation of the weighting function at a certain evaluation point (usually an integration point), the input parameter of w(||x − x_I||) = w(d_I) is changed in the following way: Define s_0 = ||x − x_I||, s_1 being the distance from the node to the crack tip, s_1 = ||x_c − x_I||, and s_2 the distance from the crack tip to the evaluation point, s_2 = ||x − x_c||. Then we change d_I as [116]

d_I = ((s_1 + s_2) / s_0)^λ · s_0;

in [16] only λ = 1, i.e. d_I = s_1 + s_2 = ||x_c − x_I|| + ||x − x_c||, has been proposed. Reasonable choices for λ are 1 or 2 [116]; however, optimal values for λ are not available and problem specific. The derivatives of the resulting shape functions are not continuous directly at the crack tip; however, this poses no difficulties as long as no integration point is placed there [116].

The modification of the support according to the diffraction method may be seen in Fig. 18. A natural extension of the diffraction method to the case of multiple discontinuities per support may be found in [110].

6.4.3 Transparency Method

The transparency method [116] makes the discontinuity transparent near its tip and completely opaque at a certain distance from it. The modified weighting function argument depends on s_0 = ||x − x_I||, the dilatation parameter ρ_I of node I, s_c, the intersection of the line x − x_I with the discontinuity, and s̄_c, the distance from the crack tip at which the discontinuity is completely opaque. For nodes directly adjacent to the discontinuity a special treatment is proposed [116]. The value s̄_c of this approach is also a free value which has to be adjusted with empirical arguments. The resulting derivatives are continuous also at the crack tip.
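The modified distance of the diffraction method (subsection 6.4.2) can be sketched as follows (illustrative: the node, tip and evaluation point coordinates are arbitrary):

```python
# Sketch: diffraction-modified distance d_I = ((s1 + s2) / s0)**lam * s0,
# used as input to the weighting function when the segment from node x_I to
# the evaluation point x crosses the discontinuity; s1 = |x_c - x_I| and
# s2 = |x - x_c| with crack tip x_c.
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def diffraction_distance(x, x_node, x_tip, lam=1.0):
    """Modified node-to-point distance according to the diffraction method."""
    s0 = dist(x, x_node)
    s1 = dist(x_tip, x_node)
    s2 = dist(x, x_tip)
    return ((s1 + s2) / s0) ** lam * s0

# For lam = 1 the modified distance is the path length around the crack
# tip, s1 + s2, which is never shorter than the straight distance s0.
x_node, x_tip, x = (0.0, 0.0), (0.5, 0.1), (1.0, 0.0)
d_mod = diffraction_distance(x, x_node, x_tip, lam=1.0)
d_straight = dist(x, x_node)
```

The enlarged effective distance shrinks the weighting function, and thus the support, behind the discontinuity.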
6.4.4
PUM Ideas
Belytschko et al. propose in [21] a discontinuous enrichment of the approximation by means of including a jump function along the discontinuity and a
specific solution at the discontinuity tip in the extrinsic basis. Consequently,
this method can be considered a member of the PUMs, see subsection 5.7. For
similar approaches see also [19, 77].
6.5
h-Adaptivity
In adaptive simulations nodes are added and removed over subsequent iteration steps. The aim is to achieve a prescribed accuracy with a minimal number of nodes or to capture a local behavior in an optimal way. Mesh-based methods
such as the FEM require a permanent update of the connectivity relations, i.e. a
conforming mesh must be maintained. Automatic remeshing routines, however,
may fail in complex geometric situations especially in three dimensions; these
aspects have already been mentioned in subsection 5.10. In contrast, MMs
seem ideally suited for adaptive procedures as they naturally compute the node
connectivity at runtime.
Most adaptive algorithms of MMs proceed as follows:
Error indication/estimation: In this step the error of a previous iteration (or time) step is estimated a posteriori. Regions of the domain are identified where the error is relatively large and refinement is most effective. For a description of error indicators/estimators and related ideas in MMs, see e.g. [35, 50, 95, 105]. We do not go into further detail, because the principles of error indicators (residual-based, gradient-based, multiscale decomposition etc.) are comparable to those of standard mesh-based methods.
Construction of a (local) Voronoi diagram: A Voronoi diagram is constructed with respect to the current node distribution in the region identified by the error estimator.
Insertion of particles: The Voronoi diagram is used as a geometrical reference for particle insertion, i.e. particles are added at the corners of the
Voronoi diagram. Additionally, the Voronoi diagram may be used to build
efficient data structures for the newly inserted nodes [101], for example
simplifying neighbour-searching procedures.
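The steps above can be sketched in 1D, where the "Voronoi corner" between two particles reduces to their midpoint (illustrative: the gradient-jump indicator and its tolerance are stand-ins for the error estimators cited above):

```python
# Sketch: a gradient-jump error indicator flags regions, and new particles
# are inserted at the midpoints of the flagged segments.

def refine(nodes, values, tol):
    """Insert a midpoint node wherever adjacent slopes jump by more than tol."""
    slopes = [(values[i + 1] - values[i]) / (nodes[i + 1] - nodes[i])
              for i in range(len(nodes) - 1)]
    new_nodes = list(nodes)
    for i in range(len(slopes) - 1):
        if abs(slopes[i + 1] - slopes[i]) > tol:      # error indicator
            # refine the two segments sharing the flagged node
            new_nodes.append(0.5 * (nodes[i] + nodes[i + 1]))
            new_nodes.append(0.5 * (nodes[i + 1] + nodes[i + 2]))
    return sorted(set(new_nodes))

# A kink at x = 0.5 triggers refinement only there.
nodes = [0.0, 0.25, 0.5, 0.75, 1.0]
values = [abs(x - 0.5) for x in nodes]               # kinked field
refined = refine(nodes, values, tol=0.5)
```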
Figure 18: Visibility criterion, diffraction and transparency method for the
treatment of discontinuities.
A noteworthy idea for an adaptive procedure with a coupled FEM/EFG approximation according to the coupling approach of Huerta [65], discussed in subsection 6.3.2, has been shown in [45]. There, first an approximation is computed with a finite element mesh, which is followed by an error estimation; finally, elements with large error are indicated, removed and replaced by particles, which may easily be refined in subsequent steps. Thereby, one very selectively profits from the advantageous properties of MMs in adaptive procedures.
6.6
Parallelization
In this subsection we follow Danielson, Hao, Liu, Uras and Li [36]. The parallelization is done with respect to the integration, which needs significantly more computing time than in classical mesh-based methods. This problem may be considered to be trivially parallelizable with a complexity of O(N/p), where p is the number of processors and N the number of integration points [55]. The parallelization of the solution of the final system of equations is not considered here, as this is not necessarily a MM-specific problem; remarks on this problem may be found in [55].
The basic principle of parallel computing is to balance the computational
load among processors while minimizing interprocessor communication (build
partitions of same size, minimize partition interface). The first phase of a
parallel computation is the partitioning phase. In MMs, integration points are
distributed to processors and are uniquely defined there. To retain data locality,
particles are redundantly defined (duplicated) on all processors possessing integration points contributing to these particles. Parallel SPH implementations
typically partition the particles (which are identical to the integration points),
whereas for Galerkin MMs partitioning is done on the integration points. Partitioning procedures may be based on
graph theory: Each integration point has a list of support particles within its DOI to which it must provide a contribution. Integration points in MMs typically contribute to many more nodes than those of similar FE models do; thus, the graphs can be very large with many edges. Reduction of the graph is possible by considering only the nearest particles (geometrical criteria). Also the reduced graph results in nearly perfectly separated partitions. In all cases, no significant reduction in performance occurred by using the reduced graph partitioning instead of the full graph partitioning.
geometric techniques: These only require the integration point coordinates for partitioning. Groups of integration points are built in the same spatial vicinity.
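A sketch of such a geometric partitioning (illustrative: points are simply sorted along one coordinate and cut into equally sized groups):

```python
# Sketch: geometric partitioning of integration points among p processors --
# sort by one coordinate and cut into contiguous, balanced groups, so that
# only the point coordinates are needed.

def partition_points(points, p):
    """Split integration points into p contiguous groups along the x-axis."""
    order = sorted(points, key=lambda pt: pt[0])
    n, k = len(order), len(order) // p
    return [order[i * k : (i + 1) * k if i < p - 1 else n] for i in range(p)]

points = [(0.9, 0.1), (0.1, 0.5), (0.4, 0.2), (0.6, 0.8), (0.2, 0.3), (0.7, 0.4)]
parts = partition_points(points, 3)       # three balanced partitions
```

Neighbouring integration points end up on the same processor, which keeps the interprocessor communication for the shared particles small.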
6.7 Solution of the Global System of Equations
In this subsection the solution of the total system of equations, and not of the k × k systems of the MLS/RKPM procedure, shall be briefly discussed. The regularity of the k × k matrix inversions of the MLS or RKPM for the construction of a PU does not ensure the regularity of the global system of equations, hence the solvability of the discrete variational problem. The matrix of the global problem may for example be singular if a numerical integration rule for the weak form is employed which is not accurate enough.
Moreover, in enrichment cases, the shape functions of the particles are not
always linearly independent, unless they are treated in a certain way (e.g. eliminating several interpolation functions) [65]. Additionally, the global matrix may
be ill-conditioned for various distributions of particle points, when the particle distributions already led to ill-conditioned k × k matrices. These topics have already been discussed in subsections 4.4, 4.6 and 6.2, respectively, and are not repeated here.
When using MMs with an intrinsic basis only, the size of the global matrix is the same for any order of consistency. For a large value of the consistency order n, the increased amount of computational work lies only in the solution of the k × k systems of equations.
6.8 Summary
In this section we discussed some problems which frequently occur in MMs and have given rise to various publications.
Many different ideas have been introduced for the handling of essential
boundary conditions in MMs. The lack of the Kronecker delta property in
meshfree methods makes this topic important, in contrast to mesh-based methods where EBCs can be imposed trivially. Methods have been shown which
e.g. work with modified weak forms or coupled/manipulated shape functions
achieving Kronecker delta property at the boundary. Advantages and disadvantages have already been mentioned previously in each of the subsections.
Integration has been intensively discussed in subsection 6.2. Integration is
the most time-consuming part in a MM calculation due to the large number of
integration points needed for a sufficiently accurate evaluation of the integrals
in the weak form. In collocation MMs the weak form reduces to the strong form
and integration is not needed which is their main advantage. Galerkin MMs
with nodal integration are closely related to collocation methods. Accuracy and
stability are the weak points of both collocation MMs and Galerkin MMs with
nodal integration.
The accuracy of full integration compared to nodal integration is considerably higher, see e.g. [15]. Integration with a background mesh or cell structure has been proposed for the earlier MMs. The method can then be considered only pseudo-meshfree.
In subsection 6.3 several methods have been discussed for the coupling of mesh-based methods with meshfree methods. The aim is always to combine the advantages of each method, above all the computational efficiency of mesh-based methods with the fact that no mesh has to be maintained in MMs. Still today, MMs turn out to be used only in rather special applications, like crack growth, which is due to their time-consuming integration. For engineering applications like flow simulation, structural dynamics etc. it seems improbable that MMs will be used for the simulation in the whole domain as a standard tool. It is our belief that only in combination with mesh-based methods can we expect to use MMs with practical relevance in these kinds of problems. Consider e.g. a heart-valve simulation, where the arteries themselves are modeled with the FEM while MMs are used only in the valve region, where a mesh is of significant disadvantage due to the large geometrical changes of the computational domain. In our opinion, the coupling of mesh-based and meshfree methods is an essential aspect for the success of MMs in practical use.
Methods for the treatment of discontinuities in MMs are discussed in 6.4, separating the approaches into two different principles: those which modify the supports of the weighting functions (and resulting shape functions) and those that use PUM ideas (see subsection 6.1.8) to enrich the approximation basis in order to reflect the special solution characteristics around a discontinuity.
In subsections 6.6 and 6.7 some statements are given on the parallelization and on the solution of the global system of equations in MMs.
7 Conclusion
In this paper an overview and a classification of meshfree methods have been presented. The similarities and differences between the variety of meshfree methods have been pointed out.
Concepts for a construction of a PU are explained with emphasis on the
MLS and RKPM. It turned out that MLS and RKPM are very similar but not
identical (see subsection 4.3). Often it does not make sense to overemphasize the aspect of a method being based on MLS or RKPM, especially if the slight difference between the MLS and RKPM is not pointed out or used. One should keep in mind that all the MMs in section 5 work with both MLS and RKPM, but that there is a certain difference between these concepts.
MLS and RKPM are separated from the resulting MMs themselves. It was
shown that in case of approximations with intrinsic basis only, the PU functions
are directly the shape functions, and thus the separation might be called somewhat superfluous in this case. However, to base our classification on exclusive
properties, we believe that constructing a PU and choosing an approximation
are two steps in general. This is obvious for cases of approximations with an
additional extrinsic basis.
The MMs themselves have been explained in detail, taking into account the
different viewpoints and origins of each method. Often, we have focused on
pointing out important characteristic features rather than on explaining how
the method functions. The latter becomes already clear from sections 3 and 4.
We found that SPH and DEM are problematic choices for a MM, the former due to the lack of consistency and the latter due to the neglect of some derivative terms. A number of corrected versions of the SPH exist which fix this lack of consistency, and the EFG method may also be viewed as a fixed version of the DEM. SPH and EFG may be the most popular MMs in practice. The first is a representative of the collocation MMs, which do not require a time-consuming integration but may show accuracy and stability problems; the latter is a representative of the Galerkin MMs, which solve the weak form of a problem with a comparably high accuracy in general, however requiring an expensive integration. MMs with an extrinsic basis are representatives of the
PUM idea; the GFEM, XFEM, PUFEM etc. fall into this class, too. The LBIE
is the meshfree equivalent of the boundary element methods and may be used
efficiently for problems where fundamental solutions are known. Some MMs
which are not based on the MLS/RKPM principle, like the NEM and MFEM
have also been discussed. However, it was not possible to present a complete
description of all available MMs.
This paper also intensively discusses problems which are related to MMs. The disadvantage of MMs of not being interpolating in general makes the imposition of EBCs awkward, and many techniques have been developed for this purpose. Procedures for the integration of the weak form in Galerkin MMs have
been shown. Coupling meshfree and mesh-based methods is a very promising
way as advantages of each method can be used where they are needed. Aspects
of discontinuity treatment, parallelization and solvers have also been discussed.
We hope this paper to be a helpful tool for the reader's successful work with meshfree methods.
References
[1] Aluru, N.R.: A point collocation method based on reproducing kernel approximations. Internat. J. Numer. Methods Engrg., 47, 1083–1121, 2000.

[2] Atluri, S.N.; Cho, J.Y.; Kim, H.-G.: Analysis of Thin Beams, Using the Meshless Local Petrov-Galerkin Method, with Generalized Moving Least Squares Interpolations. Comput. Mech., 24, 334–347, 1999.

[3] Atluri, S.N.; Kim, H.-G.; Cho, J.Y.: A Critical Assessment of the Truly Meshless Local Petrov-Galerkin (MLPG), and Local Boundary Integral Equation (LBIE) Methods. Comput. Mech., 24, 348–372, 1999.

[4] Atluri, S.N.; Shen, S.: The Meshless Local Petrov-Galerkin (MLPG) Method. Tech Science Press, Stuttgart, 2002.

[5] Atluri, S.N.; Sladek, J.; Sladek, V.; Zhu, T.: The Local Boundary Integral Equation (LBIE) and its Meshless Implementation for Linear Elasticity. Comput. Mech., 25, 180–198, 2000.

[6] Atluri, S.N.; Zhu, T.: A New Meshless Local Petrov-Galerkin (MLPG) Approach in Computational Mechanics. Comput. Mech., 22, 117–127, 1998.

[7] Atluri, S.N.; Zhu, T.: New concepts in meshless methods. Internat. J. Numer. Methods Engrg., 47, 537–556, 2000.

[8] Babuska, I.; Banerjee, U.; Osborn, J.E.: Meshless and generalized finite element methods: A survey of some major results. In Meshfree Methods for Partial Differential Equations. (Griebel, M.; Schweitzer, M.A., Eds.), Vol. 26, Springer Verlag, Berlin, 2002.

[9] Babuska, I.; Banerjee, U.; Osborn, J.E.: Survey of meshless and generalized finite element methods: A unified approach. Technical Report 02-40, TICAM, The University of Texas at Austin, 2002.

[10] Babuska, I.; Melenk, J.M.: The partition of unity finite element method. Technical Report BN-1185, Institute for Physical Science and Technology, University of Maryland, 1995.
[11] Babuska, I.; Melenk, J.M.: The Partition of Unity Method. Internat. J. Numer. Methods Engrg., 40, 727–758, 1997.

[12] Beissel, S.; Belytschko, T.: Nodal integration of the element-free Galerkin method. Comp. Methods Appl. Mech. Engrg., 139, 49–74, 1996.

[13] Belikov, V.V.; Ivanov, V.D.; Kontorovich, V.K.; Korytnik, S.A.; Semenov, A.Y.: The non-Sibsonian interpolation: a new method of interpolation of the values of a function on an arbitrary set of points. Comp. Math. Math. Phys., 37, 9–15, 1997.

[14] Belytschko, T.; Guo, Y.; Liu, W.K.; Xiao, S.P.: A unified stability analysis of meshless particle methods. Internat. J. Numer. Methods Engrg., 48, 1359–1400, 2000.

[15] Belytschko, T.; Krongauz, Y.; Dolbow, J.; Gerlach, C.: On the completeness of meshfree particle methods. Internat. J. Numer. Methods Engrg., 43, 785–819, 1998.

[16] Belytschko, T.; Krongauz, Y.; Fleming, M.; Organ, D.; Liu, W.K.S.: Smoothing and accelerated computations in the element free Galerkin method. J. Comput. Appl. Math., 74, 111–126, 1996.

[17] Belytschko, T.; Krongauz, Y.; Organ, D.; Fleming, M.; Krysl, P.: Meshless Methods: An Overview and Recent Developments. Comp. Methods Appl. Mech. Engrg., 139, 3–47, 1996.

[18] Belytschko, T.; Lu, Y.Y.; Gu, L.: Element-free Galerkin Methods. Internat. J. Numer. Methods Engrg., 37, 229–256, 1994.

[19] Belytschko, T.; Moes, N.; Usui, S.; Parimi, C.: Arbitrary discontinuities in finite elements. Internat. J. Numer. Methods Engrg., 50, 993–1013, 2001.
[20] Belytschko, T.; Organ, D.; Krongauz, Y.: A Coupled Finite Element–Element-free Galerkin Method. Comput. Mech., 17, 186–195, 1995.
[21] Belytschko, T.; Ventura, G.; Xu, J.X.: New methods for discontinuity and crack modeling in EFG. In Meshfree Methods for Partial Differential Equations. (Griebel, M.; Schweitzer, M.A., Eds.), Vol. 26, Springer Verlag, Berlin, 2002.
[22] Bonet, J.; Kulasegaram, S.: Correction and Stabilization of Smooth Particle Hydrodynamics Methods with Applications in Metal Forming Simulations. Internat. J. Numer. Methods Engrg., 47, 1189–1214, 2000.
[23] Braun, J.; Sambridge, M.: A numerical method for solving partial differential equations on highly irregular evolving grids. Nature, 376, 655–660, 1995.
[24] Brezzi, F.: On the existence, uniqueness and approximation of saddle-point problems arising from Lagrange multipliers. RAIRO Anal. Numer., R-2, 129–151, 1974.
[25] Chen, J.-S.; Pan, C.; Wu, C.-I.: Large Deformation Analysis of Rubber based on a Reproducing Kernel Particle Method. Comput. Mech., 19, 211–227, 1997.
[26] Chen, J.S.; Han, W.; You, Y.; Meng, X.: A reproducing kernel method with nodal interpolation property. Internat. J. Numer. Methods Engrg., 56, 935–960, 2003.
[27] Chen, J.S.; Liu, W.K. (eds.): Meshless particle methods. Comput. Mech., 25(2-3, special issue), 99–317, 2000.
[28] Chen, J.S.; Liu, W.K. (eds.): Meshless methods: Recent advances and new applications. Comp. Methods Appl. Mech. Engrg., 193(12-14, special issue), 933–1321, 2004.
[29] Chen, J.S.; Wang, H.-P.: New Boundary Condition Treatments in Meshfree Computation of Contact Problems. Comp. Methods Appl. Mech. Engrg., 187, 441–468, 2000.
[30] Chen, J.S.; Wu, C.T.; You, Y.: A Stabilized Conforming Nodal Integration for Galerkin Mesh-free Methods. Internat. J. Numer. Methods Engrg., 50, 435–466, 2001.
[31] Chen, J.S.; Yoon, S.; Wang, H.P.; Liu, W.K.: An improved reproducing kernel particle method for nearly incompressible finite elasticity. Comp. Methods Appl. Mech. Engrg., 181, 117–145, 2000.
[32] Chen, T.; Raju, I.S.: Coupling finite element and meshless local Petrov-Galerkin methods for two-dimensional potential problems. AIAA 2002-1659, NASA Langley Research Center, Hampton, USA, 2002.
[33] Choe, H.J.; Kim, D.W.; Kim, H.H.; Kim, Y.: Meshless Method for the Stationary Incompressible Navier-Stokes Equations. Discrete and Continuous Dynamical Systems, Series B, 1(4), 495–526, 2001.
[34] Choi, Y.J.; Kim, S.J.: Node Generation Scheme for the Meshfree Method by Voronoi Diagram and Weighted Bubble Packing. Fifth U.S. National Congress on Computational Mechanics, Boulder, CO, 1999.
[35] Chung, H.J.; Belytschko, T.: An error estimate in the EFG method. Comput. Mech., 21, 91–100, 1998.
[36] Danielson, K.T.; Hao, S.; Liu, W.K.; Uras, R.A.; Li, S.: Parallel computation of meshless methods for explicit dynamic analysis. Internat. J. Numer. Methods Engrg., 47, 1323–1341, 2000.
[37] De, S.; Bathe, K.J.: The Method of Finite Spheres. Comput. Mech., 25, 329–345, 2000.
[38] Dilts, G.A.: Moving-Least-Squares-Particle Hydrodynamics I. Consistency and Stability. Internat. J. Numer. Methods Engrg., 44, 1115–1155, 1999.
[39] Dolbow, J.; Belytschko, T.: Numerical Integration of the Galerkin Weak Form in Meshfree Methods. Comput. Mech., 23, 219–230, 1999.
[40] Duarte, C.A.: A review of some meshless methods to solve partial differential equations. Technical Report 95-06, TICAM, The University of Texas at Austin, 1995.
[41] Duarte, C.A.; Oden, J.T.: An h-p adaptive method using clouds. Comp. Methods Appl. Mech. Engrg., 139, 237–262, 1996.
[43] Duarte, C.A.M.; Oden, J.T.: H-p clouds, an h-p meshless method. Numer. Methods Partial Differential Equations, 12, 673–705, 1996.
[44] Dyka, C.T.; Ingel, R.P.: An approach for tension instability in smoothed particle hydrodynamics. Computers & Structures, 57, 573–580, 1995.
[45] Fernández-Méndez, S.; Huerta, A.: Coupling finite elements and particles for adaptivity. In Meshfree Methods for Partial Differential Equations. (Griebel, M.; Schweitzer, M.A., Eds.), Vol. 26, Springer Verlag, Berlin, 2002.
[46] Fernández-Méndez, S.; Huerta, A.: Imposing essential boundary conditions in mesh-free methods. Comp. Methods Appl. Mech. Engrg., 193, 1257–1275, 2004.
[48] Fries, T.P.; Matthies, H.G.: Meshfree Petrov-Galerkin Methods for the Incompressible Navier-Stokes Equations. In Meshfree Methods for Partial Differential Equations. (Griebel, M.; Schweitzer, M.A., Eds.), Springer Verlag, Berlin, 2004 (to appear).
[49] Garcia, O.; Fancello, E.A.; de Barcellos, C.S.; Duarte, C.A.: hp-Clouds in Mindlin's Thick Plate Model. Internat. J. Numer. Methods Engrg., 47, 1381–1400, 2000.
[50] Gavete, L.; Cuesta, J.L.; Ruiz, A.: A procedure for approximation of the error in the EFG method. Internat. J. Numer. Methods Engrg., 53, 677–690, 2002.
[51] Gerstner, T.; Griebel, M.: Numerical integration using sparse grids. Numer. Algorithms, 18, 209–232, 1998.
[52] Gingold, R.A.; Monaghan, J.J.: Kernel Estimates as a Basis for General Particle Methods in Hydrodynamics. J. Comput. Phys., 46, 429–453, 1982.
[56] Griebel, M.; Schweitzer, M.A. (eds.): Meshfree Methods for Partial Differential Equations, Vol. 26. Springer Verlag, Berlin, 2002.
[57] Günther, F.C.: A Meshfree Formulation for the Numerical Solution of the Viscous Compressible Navier-Stokes Equations. Dissertation, Northwestern University, Evanston, IL, 1998.
[58] Günther, F.C.; Liu, W.K.: Implementation of Boundary Conditions for Meshless Methods. Comp. Methods Appl. Mech. Engrg., 163, 205–230, 1998.
[59] Han, W.; Meng, X.: Error analysis of the reproducing kernel particle method. Comp. Methods Appl. Mech. Engrg., 190, 6157–6181, 2001.
[60] Han, W.; Meng, X.: Some studies of the reproducing kernel particle method. In Meshfree Methods for Partial Differential Equations. (Griebel, M.; Schweitzer, M.A., Eds.), Vol. 26, Springer Verlag, Berlin, 2002.
[61] Hao, S.; Liu, W.K.: Revisit of Moving Particle Finite Element Method. Proceedings of the Fifth World Congress on Computational Mechanics (WCCM V), Vienna, 2002.
[62] Hao, S.; Park, H.S.; Liu, W.K.: Moving particle finite element method. Internat. J. Numer. Methods Engrg., 53, 1937–1958, 2002.
[67] Huerta, A.; Vidal, Y.; Villon, P.: Pseudo-divergence-free element free Galerkin method for incompressible fluid flow. Comp. Methods Appl. Mech. Engrg., 193, 1119–1136, 2004.
[70] Jiang, B.N.: The least-squares finite element method: theory and applications in computational fluid dynamics and electromagnetics. Springer Verlag, Berlin, 1998.
[71] Jin, X.; Li, G.; Aluru, N.R.: Positivity conditions in meshless collocation methods. Comp. Methods Appl. Mech. Engrg., 193, 1171–1202, 2004.
[72] Johnson, G.R.; Stryk, R.A.; Beissel, S.R.: SPH for high velocity impact computations. Comp. Methods Appl. Mech. Engrg., 139, 347–373, 1996.
[73] Kaljevic, I.; Saigal, S.: An improved element free Galerkin formulation. Internat. J. Numer. Methods Engrg., 40, 2953–2974, 1997.
[74] Klaas, O.; Shepard, M.S.: An Octree Based Partition of Unity Method for Three Dimensional Problems. Fifth U.S. National Congress on Computational Mechanics, Boulder, CO, 1999.
[75] Krongauz, Y.; Belytschko, T.: Enforcement of Essential Boundary Conditions in Meshless Approximations Using Finite Elements. Comp. Methods Appl. Mech. Engrg., 131, 133–145, 1996.
[80] Krysl, P.; Belytschko, T.: ESFLIB: A Library to Compute the Element Free Galerkin Shape Functions. Comp. Methods Appl. Mech. Engrg., 190, 2181–2205, 2001.
[81] Kulasegaram, S.; Bonet, J.; Lok, T.-S.L.; Rodriguez-Paz, M.: Corrected Smooth Particle Hydrodynamics: A Meshless Method for Computational Mechanics. ECCOMAS 2000, CIMNE, Barcelona, 11.-14. September 2000.
[82] Lancaster, P.; Salkauskas, K.: Surfaces Generated by Moving Least Squares Methods. Math. Comput., 37, 141–158, 1981.
[83] Leem, K.H.; Oliveira, S.; Stewart, D.E.: Some Numerical Results from Meshless Linear Systems. Technical report, University of Iowa, 2001.
[84] Levin, D.: The approximation power of moving least-squares. Math. Comput., 67, 1517–1531, 1998.
[85] Li, S.; Liu, W.K.: Moving least-squares reproducing kernel method, Part II: Fourier analysis. Comp. Methods Appl. Mech. Engrg., 139, 159–193, 1996.
[86] Li, S.; Liu, W.K.: Reproducing Kernel Hierarchical Partition of Unity, Part I: Formulation and Theory. Internat. J. Numer. Methods Engrg., 45, 251–288, 1999.
[87] Li, S.; Liu, W.K.: Meshfree and particle methods and their applications. Appl. Mech. Rev., 55, 1–34, 2002.
[88] Li, S.; Lu, H.; Han, W.; Liu, W.K.; Simkins, D.C.: Reproducing kernel element method. Part II: Globally conforming I^m/C^n hierarchies. Comp. Methods Appl. Mech. Engrg., 193, 953–987, 2004.
[91] Liu, G.R.: Meshless Methods. CRC Press, Boca Raton, 2002.
[92] Liu, G.R.; Gu, T.: Meshless local Petrov-Galerkin (MLPG) method in combination with finite element and boundary element approaches. Comput. Mech., 26, 536–546, 2000.
[93] Liu, W.K.; Belytschko, T.; Oden, J.T. (eds.): Meshless methods. Comp. Methods Appl. Mech. Engrg., 139(1-4, special issue), 1–400, 1996.
[94] Liu, W.K.; Chen, Y.: Wavelet and Multiple Scale Reproducing Kernel Methods. Int. J. Numer. Methods Fluids, 21, 901–931, 1995.
[95] Liu, W.K.; Chen, Y.; Uras, R.A.; Chang, C.T.: Generalized Multiple Scale Reproducing Kernel Particle Methods. Comp. Methods Appl. Mech. Engrg., 139, 91–157, 1996.
[96] Liu, W.K.; Han, W.; Lu, H.; Li, S.; Cao, J.: Reproducing kernel element method. Part I: Theoretical formulation. Comp. Methods Appl. Mech. Engrg., 193, 933–951, 2004.
[97] Liu, W.K.; Jun, S.; Li, S.; Adee, J.; Belytschko, T.: Reproducing Kernel Particle Methods for Structural Dynamics. Internat. J. Numer. Methods Engrg., 38, 1655–1679, 1995.
[98] Liu, W.K.; Jun, S.; Zhang, Y.F.: Reproducing Kernel Particle Methods. Int. J. Numer. Methods Fluids, 20, 1081–1106, 1995.
[99] Liu, W.K.; Li, S.; Belytschko, T.: Moving Least Square Reproducing Kernel Methods (I): Methodology and Convergence. Comp. Methods Appl. Mech. Engrg., 143, 113–154, 1997.
[100] Liu, W.K.; Uras, R.A.; Chen, Y.: Enrichment of the finite element method with the reproducing kernel particle method. J. Appl. Mech., ASME, 64, 861–870, 1997.
[101] Lu, H.; Chen, J.S.: Adaptive Galerkin particle method. In Meshfree Methods for Partial Differential Equations. (Griebel, M.; Schweitzer, M.A., Eds.), Vol. 26, Springer Verlag, Berlin, 2002.
[102] Lu, H.; Li, S.; Simkins, D.C.; Liu, W.K.; Cao, J.: Reproducing kernel element method. Part III: Generalized enrichment and application. Comp. Methods Appl. Mech. Engrg., 193, 989–1011, 2004.
[103] Lu, Y.Y.; Belytschko, T.; Gu, L.: A New Implementation of the Element Free Galerkin Method. Comp. Methods Appl. Mech. Engrg., 113, 397–414, 1994.
[104] Lucy, L.B.: A numerical approach to the testing of the fission hypothesis. Astronom. J., 82(12), 1013–1024, 1977.
[105] Luo, Y.; Häussler-Combe, U.: An adaptivity procedure based on the gradient of strain energy density and its application in meshless methods. In Meshfree Methods for Partial Differential Equations. (Griebel, M.; Schweitzer, M.A., Eds.), Vol. 26, Springer Verlag, Berlin, 2002.
[106] Melenk, J.M.; Babuska, I.: The Partition of Unity Finite Element Method: Basic Theory and Applications. Comp. Methods Appl. Mech. Engrg., 139, 289–314, 1996.
[107] Monaghan, J.J.: Why particle methods work. SIAM J. Sci. Comput., 3, 422–433, 1982.
[108] Monaghan, J.J.: An introduction to SPH. Comput. Phys. Comm., 48, 89–96, 1988.
[109] Mukherjee, Y.X.; Mukherjee, S.: On Boundary Conditions in the Element-free Galerkin Method. Comput. Mech., 19, 264–270, 1997.
[110] Muravin, B.; Turkel, E.: Advance diffraction method as a tool for solution of complex non-convex boundary problems. In Meshfree Methods for Partial Differential Equations. (Griebel, M.; Schweitzer, M.A., Eds.), Vol. 26, Springer Verlag, Berlin, 2002.
[111] Nayroles, B.; Touzot, G.; Villon, P.: Generalizing the Finite Element Method: Diffuse Approximation and Diffuse Elements. Comput. Mech., 10, 307–318, 1992.
[112] Noguchi, H.; Kawashima, T.; Miyamura, T.: Element free analyses of shell and spatial structures. Internat. J. Numer. Methods Engrg., 47, 1215–1240, 2000.
[113] Oden, J.T.; Duarte, C.A.; Zienkiewicz, O.C.: A New Cloud-based hp Finite Element Method. Comp. Methods Appl. Mech. Engrg., 153, 117–126, 1998.
[114] Oñate, E.; Idelsohn, S.; Zienkiewicz, O.C.; Taylor, R.L.: A Finite Point Method in Computational Mechanics. Applications to Convective Transport and Fluid Flow. Internat. J. Numer. Methods Engrg., 39, 3839–3866, 1996.
[115] Oñate, E.; Idelsohn, S.; Zienkiewicz, O.C.; Taylor, R.L.; Sacco, C.: A Stabilized Finite Point Method for Analysis of Fluid Mechanics Problems. Comp. Methods Appl. Mech. Engrg., 139, 315–346, 1996.
[116] Organ, D.; Fleming, M.; Terry, T.; Belytschko, T.: Continuous meshless approximations for nonconvex bodies by diffraction and transparency. Comput. Mech., 18, 225–235, 1996.
[117] Park, S.H.; Youn, S.K.: The least-squares meshfree method. Internat. J. Numer. Methods Engrg., 52, 997–1012, 2001.
[118] Rabczuk, T.; Belytschko, T.; Xiao, S.P.: Stable particle methods based on Lagrangian kernels. Comp. Methods Appl. Mech. Engrg., 193, 1035–1063, 2004.
[119] Randles, P.W.; Libersky, L.D.: Smoothed Particle Hydrodynamics: Some recent improvements and applications. Comp. Methods Appl. Mech. Engrg., 139, 375–408, 1996.
[120] Sibson, R.: A vector identity for the Dirichlet tessellation. Math. Proc. Cambridge Philos. Soc., 87, 151–155, 1980.
[121] Simkins, D.C.; Li, S.; Lu, H.; Liu, W.K.: Reproducing kernel element method. Part IV: Globally compatible C^n (n ≥ 1) triangular hierarchy. Comp. Methods Appl. Mech. Engrg., 193, 1013–1034, 2004.
[122] Sladek, V.; Sladek, J.; Atluri, S.N.; Van Keer, R.: Numerical Integration of Singularities in Meshless Implementation of Local Boundary Integral Equations. Comput. Mech., 25, 394–403, 2000.
[123] Strouboulis, T.; Babuska, I.; Copps, K.: The design and analysis of the Generalized Finite Element Method. Comp. Methods Appl. Mech. Engrg., 181, 43–69, 2000.
[124] Strouboulis, T.; Copps, K.; Babuska, I.: The Generalized Finite Element Method. Comp. Methods Appl. Mech. Engrg., 190, 4081–4193, 2001.
[125] Sukumar, N.; Moran, B.; Belytschko, T.: The natural element method in solid mechanics. Internat. J. Numer. Methods Engrg., 43(5), 839–887, 1998.
[126] Sukumar, N.; Moran, B.; Semenov, A.Y.; Belikov, V.V.: Natural neighbour Galerkin methods. Internat. J. Numer. Methods Engrg., 50, 1–27, 2001.
[129] Wagner, G.J.; Liu, W.K.: Hierarchical enrichment for bridging scales and mesh-free boundary conditions. Internat. J. Numer. Methods Engrg., 50, 507–524, 2001.
[130] Wu, C.K.C.; Plesha, M.E.: Essential boundary condition enforcement in meshless methods: Boundary flux collocation method. Internat. J. Numer. Methods Engrg., 53, 499–514, 2002.
[131] Zhang, X.; Liu, X.H.; Song, K.Z.; Lu, M.W.: Least-squares collocation meshless method. Internat. J. Numer. Methods Engrg., 51, 1089–1100, 2001.
[132] Zhu, T.; Zhang, J.; Atluri, S.N.: A Meshless Local Boundary Integral Equation (LBIE) Method for Solving Nonlinear Problems. Comput. Mech., 22, 174–186, 1998.
[133] Zhu, T.; Zhang, J.-D.; Atluri, S.N.: A Local Boundary Integral Equation (LBIE) Method in Computational Mechanics, and a Meshless Discretization Approach. Comput. Mech., 21, 223–235, 1998.