
Engineering Analysis with Boundary Elements 156 (2023) 240–250


Polyharmonic splines interpolation on scattered data in 2D and 3D with applications

Kalani Rubasinghe a,d, Guangming Yao a,∗, Jing Niu b, Gantumur Tsogtgerel c

a Department of Mathematics, Clarkson University, Potsdam, NY 13699-5815, USA
b Department of Mathematics, Harbin Normal University, Harbin, China
c Department of Mathematics and Statistics, McGill University, Montréal, Québec, H3A 2K6, Canada
d Department of Mathematics, State University of New York at Canton, Canton, NY 13617, USA

ARTICLE INFO ABSTRACT

Keywords: Data interpolation is a fundamental problem in many applied mathematics and scientific computing fields.
Radial basis functions This paper introduces a modified implicit local radial basis function interpolation method for scattered data
Interpolation using polyharmonic splines (PS) with a low degree of polynomial basis. This is an improvement to the original
Scattered data
method proposed in 2015 by Yao et al.. In the original approach, only radial basis functions (RBFs) with
Parallel computing
shape parameters, such as multiquadrics (MQ), inverse multiquadrics (IMQ), Gaussian, and Matern RBF are
used. The authors claimed that the conditionally positive definite RBFs such as polyharmonic splines 𝑟2𝑛 ln 𝑟
and 𝑟2𝑛+1 ‘‘failed to produce acceptable results’’.
In this paper, we verified that when polyharmonic splines together with a polynomial basis is used on the
interpolation scheme, high-order accuracy and excellent conditioning of the global sparse systems are gained
without selecting a shape parameter. The scheme predicts functions’ values at a set of discrete evaluation
points, through a global sparse linear system. Compared to standard implementation, computational efficiency
is achieved by using parallel computing. Applications of the proposed algorithms to 2D and 3D benchmark
functions on uniformly distributed random points, the Halton quasi-points on regular or Stanford bunny shape
domains, and an image interpolation problem confirmed the effectiveness of the method. We also compared
the algorithms with other methods available in the literature to show the superiority of using PS augmented
with a polynomial basis. High accuracy can be easily achieved by increasing the order of polyharmonic splines
or the number of points in local domains, when small order of polynomials are used in the basis. MATLAB
code for the 3D bunny example is shared on MATLAB Central File Exchange (Yao, 2023).

1. Introduction

Scattered data interpolation is a common and fundamental problem in many scientific and engineering studies [1–7]. There has already been much work [8] in this area, yet interpolation is still a difficult and computationally expensive problem. Polynomial interpolation and piecewise polynomial splines were generally used until 1971, when Rolland Hardy introduced an interpolation method based on the multiquadric (MQ) radial basis function (RBF) [9]. Since then, many different RBF interpolation schemes have been developed. A few different methods can be found in [10–12], and a comparison of radial basis function methods can be found in [13].

Despite the simplicity, applicability to various kinds of problems, and effectiveness of RBF-based methods, the resultant system of equations is often ill-conditioned when there is a large number of data points. Apart from that, global interpolation methods based on RBFs suffer from the typical drawbacks of a global method, such as high memory requirements and high computational cost. In fact, global RBF methods produce a linear system as large as the number of data points; the direct solution of such a system costs $\mathcal{O}(N^3)$ in time and $\mathcal{O}(N^2)$ in memory. There are several methods that get around these issues, such as domain decomposition [14,15], accelerated iterated approximate moving least squares [16], RBF-QR algorithms [17], compactly supported RBFs [18], the radial basis function-finite difference method [19], and many others. One disadvantage of the domain decomposition methods is that the domain discretization and the joining phase of the local interpolants are not easy to implement. Recently, Hansen shared a toolbox "regtools" including TSVD, Tikhonov, and LSQR regularization methods for the analysis and solution of discrete ill-posed problems [20].
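The $\mathcal{O}(N^3)$ solve cost and the ill-conditioning of dense global RBF systems noted above are easy to observe numerically. The following Python/NumPy sketch (the experiments in this paper use MATLAB; the shape parameter, seed, and point counts here are illustrative assumptions on our part) builds the dense global multiquadric matrix and reports its condition number as $N$ grows:

```python
import numpy as np

def mq_condition(N, c=1.0, seed=0):
    """Condition number of the dense N x N global multiquadric matrix
    A_ij = sqrt(||x_i - x_j||^2 + c^2) on random 2D points."""
    X = np.random.default_rng(seed).random((N, 2))
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    A = np.sqrt(r ** 2 + c ** 2)      # multiquadric phi(r) = sqrt(r^2 + c^2)
    return np.linalg.cond(A)          # 2-norm condition number via the SVD

for N in (25, 100, 400):
    print(N, f"{mq_condition(N):.2e}")
```

The condition number typically grows by orders of magnitude as $N$ increases, while a direct dense solve scales as $\mathcal{O}(N^3)$ in time and $\mathcal{O}(N^2)$ in memory — the motivation for the localized methods developed in this paper.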
points. Apart from that, global interpolation methods based on RBFs

∗ Corresponding author.
E-mail addresses: [email protected] (G. Yao), [email protected] (J. Niu).

https://doi.org/10.1016/j.enganabound.2023.08.001
Received 1 March 2023; Received in revised form 1 August 2023; Accepted 2 August 2023
Available online 16 August 2023
0955-7997/© 2023 Elsevier Ltd. All rights reserved.

The two methods proposed in [21] are fast localized RBF algorithms for large-scale 2D scattered data interpolation. In these methods, an interpolation is performed on each local influence domain, and then all the influence domains are combined into a sparse matrix with the use of RBFs like Gaussian, MQ, normalized MQ, and Matern. These methods are categorized as implicit methods in the sense that the interpolant is not explicitly defined by the numerical approximation; instead, the approximation at a set of discrete evaluation points is given. We will continue to use this characterization throughout our work. On the other hand, if the interpolation function is defined by the numerical scheme, we call it an explicit method.

The authors in [21] claimed that conditionally positive definite RBFs such as the polyharmonic splines $r^{2n}\ln r$ and $r^{2n+1}$ "failed to produce acceptable results". However, in this paper, we verified that when polyharmonic splines together with a polynomial basis are used in the implicit interpolation scheme, high-order accuracy and excellent conditioning of the global sparse systems are obtained without the need to select a shape parameter. The proposed algorithm is even more accurate than the most recent publication [7] from 2023; moreover, [7] is a global interpolation scheme, while our algorithms are local.

One may argue that the proposed algorithm is the same idea as the Radial Basis Function-Finite Difference method (RBF-FD) [19] applied to interpolation problems. However, there are fundamental differences between the two methods: (1) our method is an implicit method, where only approximations at a set of discrete evaluation points are produced, whereas RBF-FD is an explicit interpolation in which an interpolation function is given by the numerical scheme; (2) our method constructs the local domains by searching within the interpolation points or the union of interpolation and evaluation points, whereas RBF-FD constructs local domains purely within the evaluation points.

In Section 2, we briefly introduce global RBF interpolation and the positive-definiteness of RBFs. In Section 3, we propose the localized RBF interpolation methods for scattered data interpolation in $\mathbb{R}^d$, where $d$ is a positive integer. We categorize this algorithm as Local Implicit Interpolation using Polyharmonic Splines and Polynomials (LI2Poly2), with Algorithm 1 and Algorithm 2 based on two different ways of creating local domains. Section 4 is dedicated to numerical experiments carried out on 2D scattered data, followed by a few 3D experiments; numerical results are compared with the implicit local RBF method [21] and the CS-RBF method, and the performance of the proposed method is demonstrated regarding accuracy, efficiency, and parameter selection. In Section 5, we draw some conclusions on the usage of the polyharmonic spline basis in the proposed method and possible improvements.

2. Radial basis function interpolation

The problem of scattered data interpolation is: given a set of distinct data points $\mathbf{x}_i \in \mathbb{R}^d$ with associated data values $\mathbf{y}_i \in \mathbb{R}$ for $i = 1, 2, \ldots, N$, find an interpolant $\hat{f}(\mathbf{x}) : \mathbb{R}^d \to \mathbb{R}$ satisfying $\hat{f}(\mathbf{x}_i) = \mathbf{y}_i$, $i = 1, \ldots, N$.

The global radial basis function interpolant $\hat{f}$ is given by
$$\hat{f}(\mathbf{x}) = \sum_{j=1}^{N} \alpha_j\, \phi(\|\mathbf{x} - \mathbf{x}_j\|), \tag{1}$$
where $\phi(r)$ is a radial basis function, $r = \|\mathbf{x} - \mathbf{x}_j\| \ge 0$ is the Euclidean distance, and $\mathbf{x}_j$ is the center of the RBF; that is, $r$ is the distance between the point $\mathbf{x}$ and the centers of the basis functions. The unknown real coefficients $\alpha_j$, $j = 1, 2, \ldots, N$, are determined by enforcing the interpolation condition $\hat{f}(\mathbf{x}_i) = \mathbf{y}_i$:
$$\mathbf{y}_i = \hat{f}(\mathbf{x}_i) = \sum_{j=1}^{N} \alpha_j\, \phi(\|\mathbf{x}_i - \mathbf{x}_j\|), \quad i = 1, 2, \ldots, N. \tag{2}$$
The resulting $N \times N$ linear system of equations can be written in matrix form as
$$A\alpha = b,$$
where $\alpha = [\alpha_1, \alpha_2, \ldots, \alpha_N]^T$, $b = [\mathbf{y}_1, \mathbf{y}_2, \ldots, \mathbf{y}_N]^T$, and the entries of $A$ are given by $a_{i,j} = \phi(\|\mathbf{x}_i - \mathbf{x}_j\|)$, $1 \le i, j \le N$. The solution of the above interpolation problem exists and is unique if and only if the interpolation matrix $A$ is nonsingular, which is true for certain choices of RBFs that are positive definite.

Theorem 2.1 ([12]). A real-valued continuous function $\phi$ is positive definite on $\mathbb{R}^d$ if and only if it is even and
$$\sum_{i=1}^{N}\sum_{j=1}^{N} \alpha_i \alpha_j\, \phi(\mathbf{x}_i - \mathbf{x}_j) \ge 0 \tag{3}$$
for any $N$ distinct data points $\mathbf{x}_1, \ldots, \mathbf{x}_N \in \mathbb{R}^d$ and $\alpha = (\alpha_1, \ldots, \alpha_N)^T \in \mathbb{R}^N$.

Radially symmetric RBFs are clearly even functions. If $\phi(r)$ is a positive definite function, it can be proved that the interpolation matrix $A$ is positive definite for any distinct points $\mathbf{x}_1, \ldots, \mathbf{x}_N$, making it nonsingular. For example, the Gaussian, inverse multiquadrics (IMQ), Matern, and compactly supported RBFs (CS-RBFs) are positive definite functions [22,23]. Other commonly used RBFs, such as multiquadrics (MQ) and polyharmonic splines (PS), have only been shown to be conditionally positive definite.

Definition 2.2 ([22]). A real-valued continuous even function $\phi$ is called conditionally positive definite of order $m$ on $\mathbb{R}^d$ if
$$\sum_{i=1}^{N}\sum_{j=1}^{N} \alpha_i \alpha_j\, \phi(\mathbf{x}_i - \mathbf{x}_j) \ge 0 \tag{4}$$
for any $N$ distinct data points $\mathbf{x}_1, \ldots, \mathbf{x}_N \in \mathbb{R}^d$ and $\alpha = (\alpha_1, \ldots, \alpha_N)^T \in \mathbb{R}^N$ satisfying
$$\sum_{j=1}^{N} \alpha_j\, p(\mathbf{x}_j) = 0 \tag{5}$$
for any real-valued polynomial $p$ of degree at most $m-1$. The function $\phi$ is called strictly conditionally positive definite of order $m$ on $\mathbb{R}^d$ if, for distinct points $\mathbf{x}_1, \ldots, \mathbf{x}_N \in \mathbb{R}^d$, $\alpha \ne \mathbf{0}$ implies strict inequality in (4).

When using radial basis functions that are conditionally positive definite, one has to add polynomial basis functions up to a certain maximal degree to the interpolant (1), as follows:
$$\hat{f}(\mathbf{x}) = \sum_{j=1}^{N} \alpha_j\, \phi(\|\mathbf{x} - \mathbf{x}_j\|) + \sum_{k=1}^{q} \beta_k\, p_k(\mathbf{x}), \tag{6}$$
where $\{p_1, p_2, \ldots, p_q\}$ forms a basis for $\mathbb{P}_{m-1}^d$, the space of polynomials of total degree less than or equal to $m-1$ in $d$ variables, and $q = \binom{d+m-1}{d}$. See Table 1 for some examples of the values of $q$ in dimension $d$.

Table 1
The dimension $q$ of the basis $\{p_1, p_2, \ldots, p_q\}$, where the $p_i$, $i = 1, \ldots, q$, are polynomials of degree up to $m-1$ in dimension $d$.

m-1    d=1    d=2    d=3
0      1      1      1
1      2      3      4
2      3      6      10
3      4      10     20
4      5      15     35
5      6      21     56
6      7      28     84

The new linear system is created by enforcing the interpolation condition $\hat{f}(\mathbf{x}_i) = \mathbf{y}_i$ for $i = 1, \ldots, N$. We also impose the following additional solvability constraints on the polynomial part to guarantee a unique solution of the new linear system:
$$\sum_{j=1}^{N} \alpha_j\, p_k(\mathbf{x}_j) = 0, \quad k = 1, \ldots, q. \tag{7}$$
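As a concrete illustration of (6)–(7), the following Python/NumPy sketch (our own illustration — the paper's code is MATLAB) assembles and solves the augmented global system for the thin-plate spline $\phi_{2,2}(r) = r^2 \ln r$ with a linear polynomial tail, i.e. $m = 2$ and $q = 3$ in 2D (cf. Table 1):

```python
import numpy as np

def tps(r):
    # Thin-plate spline phi_{2,2}(r) = r^2 ln r, taken as 0 at r = 0
    out = np.zeros_like(r, dtype=float)
    pos = r > 0
    out[pos] = r[pos] ** 2 * np.log(r[pos])
    return out

def interpolate_global(X, y, Z):
    # Assemble and solve the augmented (N+q) x (N+q) system combining the
    # interpolation conditions with the solvability constraints (7),
    # then evaluate the interpolant (6) at the points Z.
    N = len(X)
    A = tps(np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1))  # a_ij
    P = np.hstack([np.ones((N, 1)), X])      # p_ij = p_j(x_i): tail 1, x, y
    q = P.shape[1]
    M = np.block([[A, P], [P.T, np.zeros((q, q))]])
    coef = np.linalg.solve(M, np.concatenate([y, np.zeros(q)]))
    alpha, beta = coef[:N], coef[N:]
    B = tps(np.linalg.norm(Z[:, None, :] - X[None, :, :], axis=-1))
    return B @ alpha + np.hstack([np.ones((len(Z), 1)), Z]) @ beta

# Linear data lie in the polynomial tail, so they are reproduced exactly:
rng = np.random.default_rng(0)
X = rng.random((40, 2))
Z = rng.random((5, 2))
fhat = interpolate_global(X, X[:, 0] + X[:, 1], Z)
print(np.max(np.abs(fhat - (Z[:, 0] + Z[:, 1]))))
```

Because linear functions lie in the polynomial tail, the data $f(x,y) = x + y$ are reproduced up to rounding error, which is a quick correctness check for the assembled system.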


Thus, the following linear system is obtained by combining the interpolation condition with the additional solvability constraints (7):
$$\begin{pmatrix} A & P \\ P^T & \mathbf{0} \end{pmatrix}\begin{pmatrix} \alpha \\ \beta \end{pmatrix} = \begin{pmatrix} b \\ \mathbf{0} \end{pmatrix}, \tag{8}$$
where $\alpha = [\alpha_1, \alpha_2, \ldots, \alpha_N]^T$, $\beta = [\beta_1, \beta_2, \ldots, \beta_q]^T$, $b = [\mathbf{y}_1, \mathbf{y}_2, \ldots, \mathbf{y}_N]^T$, the entries of $A$ are given by $a_{i,j} = \phi(\|\mathbf{x}_i - \mathbf{x}_j\|)$, $1 \le i, j \le N$, and the entries of the $N \times q$ matrix $P$ are given by $p_{i,j} = p_j(\mathbf{x}_i)$, $1 \le i \le N$, $1 \le j \le q$.

Once the unknown coefficients are obtained by solving the resulting $(N+q) \times (N+q)$ linear system, the interpolations can be made. Global interpolation methods based on RBFs are easy to implement, but when the number of collocation points within the domain is high, the resulting matrix suffers from ill-conditioning and from problems associated with computational cost, including storage and time. On the other hand, adding polynomials in a global context can lead to other drawbacks associated with polynomial interpolation. Given the limitations of global methods, formulations of localized methods offer an alternative for large-scale realistic data. In the next section, we present two local methods whose localization procedures are similar to those of the generalized finite difference method.

3. Localized RBF interpolation based on polyharmonic splines

The two methods proposed in [21] are fast localized RBF algorithms for large-scale 2D scattered data interpolation. In these methods, an interpolation is performed on each local influence domain, and then all the influence domains are combined into a sparse matrix with the use of RBFs like Gaussian, MQ, normalized MQ, and Matern. All of these RBFs come with a free shape parameter that needs to be chosen carefully in the interpolation process. Yet, there is no practical and theoretical procedure for choosing the optimal shape parameter in applications, apart from some existing efforts [24–27]. It is true that smooth RBFs like MQ give more accurate results at smaller values of the shape parameter $c$, but there is a trade-off between accuracy and conditioning: as $c$ decreases, the function becomes flatter and the accuracy increases until numerical ill-conditioning steps in. There are ways to improve the performance of MQ RBF methods, such as employing fictitious nodes [28,29], pre-conditioning [15], etc. [30–32].

We have improved the two local methods in [21] by incorporating shape-parameter-free polyharmonic splines (PS RBFs) together with polynomial basis functions. This is an extension of the current methods, achieving high accuracy without the need to select a shape parameter. It is known that using high-order polynomials in global methods can lead to Runge's phenomenon; such effects are alleviated in the local context, as one is only interested in interpolation within a small neighborhood.

For convenience, let us consider the interpolation problem in two-dimensional space. Suppose we have a set of distinct scattered data points $\{\mathbf{x}_i\}_{i=1}^{N} \subset \mathbb{R}^2$ and their function values $\{f(\mathbf{x}_i)\}_{i=1}^{N} \subset \mathbb{R}$. Let $\{\mathbf{z}_j\}_{j=1}^{N_t} \subset \mathbb{R}^2$ be a set of evaluation points. We try to find an interpolant $\hat{f}$ such that $\hat{f}(\mathbf{z}_j) \approx f(\mathbf{z}_j)$, $j = 1, \ldots, N_t$, and $\hat{f}(\mathbf{x}_i) = f(\mathbf{x}_i)$, $i = 1, \ldots, N$.

The polyharmonic splines in $\mathbb{R}^d$ are defined as follows:
$$\phi_{d,k}(r) = \begin{cases} r^{2k-d}\ln(r), & d \text{ even}, \\ r^{2k-d}, & d \text{ odd}, \end{cases}$$
where $2k > d$ and $k \in \mathbb{N}$. For example, in $\mathbb{R}^2$, $\phi_{2,3}(r) = r^4 \ln(r)$, so $d = 2$ and $k = 3$; thus, polynomials up to degree 2 or higher need to be added to ensure unique solvability.

3.1. LI2Poly2-Algorithm 1

For each $\mathbf{x}_i$, we choose the $n$ nearest evaluation points to $\mathbf{x}_i$ to create a local influence domain $\Omega_i = \{\mathbf{z}_j^{[i]}\}_{j=1}^{n}$, in which $j = 1, \ldots, n$ denotes the local index of each node in $\Omega_i$. Note that $n \ll N$. In this construction, each interpolation point has a local influence domain that contains only $n$ evaluation points. Fig. 1 shows an example of local domain construction with $n = 5$.

Fig. 1. Local influence domain of $\mathbf{x}_i$ with five nearest evaluation points in Algorithm 1.

Now let us focus on the RBF interpolation on the local domain $\Omega_i$, and let $\mathbf{z} = \mathbf{z}_j^{[i]} \in \Omega_i$ for some $j \le n$. Then $\hat{f}(\mathbf{z})$ can be written, as previously shown in (6), as
$$\hat{f}(\mathbf{z}) = \sum_{j=1}^{n} \alpha_j\, \phi(\|\mathbf{z} - \mathbf{z}_j^{[i]}\|) + \sum_{l=1}^{q} \alpha_{n+l}\, p_l(\mathbf{z}),$$
where $\phi$ is chosen to be a polyharmonic spline RBF and $\{p_1, p_2, \ldots, p_q\}$ forms a basis for the polynomials of degree less than or equal to $m-1$ in $\mathbb{R}^d$, with $q = \binom{d+m-1}{d}$. Notice that $m = k - \lceil d/2 \rceil + 1$ is the order of the conditionally positive definite polyharmonic spline $\phi_{d,k}$ [33]. In addition, the size of the local domain must be greater than the number of polynomial basis functions $q$ for this to work. In this example, the local domain size needs to be larger than $q = \binom{2+2}{2} = 6$ in $\mathbb{R}^2$.

Collocation on the local domain of influence leads to the following system:
$$\sum_{j=1}^{n} \alpha_j\, \phi(\|\mathbf{z}_k^{[i]} - \mathbf{z}_j^{[i]}\|) + \sum_{l=1}^{q} \alpha_{n+l}\, p_l(\mathbf{z}_k^{[i]}) = \hat{f}(\mathbf{z}_k^{[i]}), \quad k = 1, 2, \ldots, n, \tag{9}$$
$$\sum_{j=1}^{n} \alpha_j\, p_l(\mathbf{z}_j^{[i]}) = 0, \quad l = 1, 2, \ldots, q. \tag{10}$$

Let the coefficient matrices of the first and second terms on the left-hand side of (9) be $\boldsymbol{\Phi}$ and $\mathbf{P}$, respectively; that is,
$$\boldsymbol{\Phi} = \begin{pmatrix} \phi(\|\mathbf{z}_1^{[i]} - \mathbf{z}_1^{[i]}\|) & \cdots & \phi(\|\mathbf{z}_1^{[i]} - \mathbf{z}_n^{[i]}\|) \\ \phi(\|\mathbf{z}_2^{[i]} - \mathbf{z}_1^{[i]}\|) & \cdots & \phi(\|\mathbf{z}_2^{[i]} - \mathbf{z}_n^{[i]}\|) \\ \vdots & \ddots & \vdots \\ \phi(\|\mathbf{z}_n^{[i]} - \mathbf{z}_1^{[i]}\|) & \cdots & \phi(\|\mathbf{z}_n^{[i]} - \mathbf{z}_n^{[i]}\|) \end{pmatrix}, \qquad \mathbf{P} = \begin{pmatrix} 1 & x_1 & y_1 & \cdots & y_1^{m-1} \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & x_n & y_n & \cdots & y_n^{m-1} \end{pmatrix}. \tag{11}$$

Then we can introduce a block matrix form for the system (9)–(10):
$$\begin{pmatrix} \boldsymbol{\Phi} & \mathbf{P} \\ \mathbf{P}^T & \mathbf{0} \end{pmatrix} \boldsymbol{\alpha}^{[i]} = \begin{pmatrix} \hat{\mathbf{f}}_n \\ \mathbf{0} \end{pmatrix}, \tag{12}$$
where $\boldsymbol{\alpha}^{[i]} = [\alpha_1, \alpha_2, \ldots, \alpha_{n+q}]^T$ and $\hat{\mathbf{f}}_n = [\hat{f}(\mathbf{z}_1^{[i]}), \hat{f}(\mathbf{z}_2^{[i]}), \ldots, \hat{f}(\mathbf{z}_n^{[i]})]^T$. Let the coefficient matrix in (12) be $\boldsymbol{\Psi}$. Then the unknown coefficients in (9)–(10) can be expressed as
$$\boldsymbol{\alpha}^{[i]} = \boldsymbol{\Psi}^{-1}\begin{pmatrix} \hat{\mathbf{f}}_n \\ \mathbf{0} \end{pmatrix}. \tag{13}$$
Therefore, for $i = 1, \ldots, N$,
$$f(\mathbf{x}_i) = \sum_{j=1}^{n} \alpha_j\, \phi(\|\mathbf{x}_i - \mathbf{z}_j^{[i]}\|) + \sum_{l=1}^{q} \alpha_{n+l}\, p_l(\mathbf{x}_i)$$


$$= \boldsymbol{\Phi}(\mathbf{x}_i)\,\boldsymbol{\alpha}^{[i]} = \boldsymbol{\Phi}(\mathbf{x}_i)\,\boldsymbol{\Psi}^{-1}\begin{pmatrix} \hat{\mathbf{f}}_n \\ \mathbf{0} \end{pmatrix} = \boldsymbol{\Lambda}_{n+q}(\mathbf{x}_i)\begin{pmatrix} \hat{\mathbf{f}}_n \\ \mathbf{0} \end{pmatrix} = \boldsymbol{\Lambda}_n(\mathbf{x}_i)\,\hat{\mathbf{f}}_n, \tag{14}$$
where
$$\boldsymbol{\Phi}(\mathbf{x}_i) = \left[\, \phi(\|\mathbf{x}_i - \mathbf{z}_1^{[i]}\|), \ldots, \phi(\|\mathbf{x}_i - \mathbf{z}_n^{[i]}\|),\; 1, x, y, x^2, xy, y^2, \ldots, x^{m-1}, x^{m-2}y, \ldots, xy^{m-2}, y^{m-1} \,\right]$$
with the monomials evaluated at $\mathbf{x}_i$, and $\boldsymbol{\Lambda}_{n+q}(\mathbf{x}_i) = \boldsymbol{\Phi}(\mathbf{x}_i)\boldsymbol{\Psi}^{-1}$. Note that $\boldsymbol{\Lambda}_n(\mathbf{x}_i)$ is obtained by omitting the last $q$ elements of the vector $\boldsymbol{\Lambda}_{n+q}(\mathbf{x}_i)$.

The system (14) can easily be reformulated into an $N \times N_t$ sparse system by extending the local $\hat{\mathbf{f}}_n$ to a global $\hat{\mathbf{f}}_{N_t} = [\hat{f}(\mathbf{z}_1), \hat{f}(\mathbf{z}_2), \ldots, \hat{f}(\mathbf{z}_{N_t})]^T$. This can be done by inserting zeros into $\boldsymbol{\Lambda}_n(\mathbf{x}_i)$ based on the mapping between $\hat{\mathbf{f}}_n$ and $\hat{\mathbf{f}}_{N_t}$. It follows that
$$f(\mathbf{x}_i) = \boldsymbol{\Lambda}_{N_t}(\mathbf{x}_i)\,\hat{\mathbf{f}}_{N_t}, \tag{15}$$
where $\boldsymbol{\Lambda}_{N_t}(\mathbf{x}_i)$ is the vector obtained by adding zeros into $\boldsymbol{\Lambda}_n(\mathbf{x}_i)$ at the appropriate places.

For example, assume $N_t = 50$, $n = 3$, and $\Omega_i = \{\mathbf{z}_1^{[i]}, \mathbf{z}_2^{[i]}, \mathbf{z}_3^{[i]}\} = \{\mathbf{z}_{12}, \mathbf{z}_{20}, \mathbf{z}_{23}\}$. Then we need to insert 47 zeros into $\boldsymbol{\Lambda}_n(\mathbf{x}_i)$, keeping nonzero values only at the 12th, 20th, and 23rd positions of the resulting 50-vector $\boldsymbol{\Lambda}_{N_t}(\mathbf{x}_i)$. By solving (15), the unknown approximate function values at the $N_t$ evaluation points, $\hat{f}(\mathbf{z}_j)$, $j = 1, 2, \ldots, N_t$, can be found. This can be done using any direct or iterative method for solving systems of linear equations, given that $N_t \le N$. The proposed algorithm works in any dimension $d$, like many other RBF-based methods.

3.2. LI2Poly2-Algorithm 2

Algorithm 2 is developed with the idea of taking all the interpolation points and evaluation points in each local neighborhood into consideration for the interpolation at a single point. In this construction, each interpolation point has a local influence domain that contains both evaluation points and interpolation points, $n$ in total. Fig. 2 shows an example of local domain construction with $n = 7$.

Fig. 2. Local influence domain consisting of the seven nearest points to $\mathbf{x}_i$ in Algorithm 2.

For each $\mathbf{x}_i$, we choose the nearest $n_1$ evaluation points and $n_2$ interpolation points, adding up to $n$, to create a local influence domain $\Omega_i = \{\mathbf{z}_j^{[i]}\}_{j=1}^{n_1} \cup \{\mathbf{x}_j^{[i]}\}_{j=1}^{n_2}$. The local interpolation procedure presented in Algorithm 1, which resulted in (15), should be changed to account for the changes in Algorithm 2. The resulting system of linear equations is of size $N \times (N_t + N)$ and is underdetermined:
$$\mathbf{f}(\mathbf{x}_i) = \boldsymbol{\Lambda}_{N_t+N}(\mathbf{x}_i)\begin{pmatrix} \hat{\mathbf{f}}_{N_t} \\ \mathbf{f}_N \end{pmatrix}, \quad i = 1, \ldots, N, \tag{16}$$
where $\mathbf{f}_N = [f(\mathbf{x}_1), \ldots, f(\mathbf{x}_N)]^T$. In order to solve this system, it is reformulated into a $2N \times (N_t + N)$ system as follows:
$$\begin{pmatrix} \boldsymbol{\Lambda}_{N_t+N} \\ \left(\mathbf{0}_{N \times N_t}\;\; \mathbf{I}_{N \times N}\right) \end{pmatrix}\begin{pmatrix} \hat{\mathbf{f}}_{N_t} \\ \mathbf{f}_N \end{pmatrix} = \begin{pmatrix} \mathbf{f}_N \\ \mathbf{f}_N \end{pmatrix}. \tag{17}$$
The approximated function values $\hat{\mathbf{f}}_{N_t}$ can be obtained by solving (17) using any system-solving technique.

The efficiency of the method can be illustrated by considering its asymptotic computational complexity. For both algorithms, we need to:

Step 1. For each interpolation point, calculate the coefficient matrix in (14) by:
◦ first, building a kd-tree among all $N_t$ evaluation points for Algorithm 1, or among all $N_t$ evaluation points and $N$ interpolation points for Algorithm 2;
◦ second, finding the $n$ nearest neighbors of each of the $N$ interpolation points using the kd-tree created above;
◦ third, solving the small local systems of size $(n+q) \times (n+q)$, of which there are $N$.

Step 2. Solve a sparse system of size $N \times N$ with $n$ unknowns in each equation.

Fig. 3 shows the detailed computational complexity associated with both algorithms. Note that there are $N_t$ evaluation points and $N$ interpolation points, and $q$ is the number of polynomial basis functions. Thus, there are $N$ small systems of size $(n+q) \times (n+q)$ in Algorithm 1, and the cost of solving them is $\mathcal{O}(N(n+q)^3)$; however, $n, q \ll N$ in practice. Hence, the computational complexity in time of Algorithm 1 is $\mathcal{O}((N_t + nN)\log(N_t)) + \mathcal{O}(N(n+q)^3) + \mathcal{O}(Nn^2)$, and that of Algorithm 2 is $\mathcal{O}(((n+1)N + N_t)\log(N + N_t)) + \mathcal{O}(N(n+q)^3) + \mathcal{O}((N + N_t)n^2)$.

In the next section, we demonstrate the accuracy and efficiency of the proposed method, mainly using two-dimensional data, together with some results on three-dimensional data. The stability of the method is also inspected by studying the condition numbers of the interpolation matrices and of the global sparse matrices.

4. Numerical results

To illustrate the effectiveness of the method on scattered data, we consider examples in two and three dimensions on both regular and irregular domains. Recall the following key parameters/notations:

$N$: the number of interpolation points
$N_t$: the number of evaluation (test) points
$n$: the number of points in the local domain of influence
$m$: the degree of the highest-order polynomials
$k$: the order of the PS.

Numerical results are compared with the exact function values, and the accuracy of the method is measured in terms of the root mean squared error ($\epsilon_{rms}$) and the maximum absolute error ($\epsilon_{max}$), given by
$$\epsilon_{rms} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(\hat{f}_i - f_i\right)^2}, \qquad \epsilon_{max} = \max_{1 \le i \le N}\left|\hat{f}_i - f_i\right|, \tag{18}$$
where $\hat{f}_i = \hat{f}(\mathbf{x}_i)$ is the approximated value of $f_i = f(\mathbf{x}_i)$. The condition number of the interpolation matrix $A$ is defined as
$$\mathrm{cond}(A) = \|A\|_2\,\|A^{-1}\|_2 = \frac{\sigma_{max}}{\sigma_{min}}, \tag{19}$$


where $\sigma_{max}$ and $\sigma_{min}$ are the largest and smallest singular values of $A$. In our numerical experiments, we choose the interpolation points to be evenly distributed, while the evaluation points (test points) are randomly distributed Halton quasi-points, with the constraint $N_t < N$. In the case $N \le N_t$, we split the evaluation points into subsets and perform the interpolation separately; we explain how interpolation is done in such cases later in this section.

Fig. 3. Computational complexity in time of LI2Poly2 Algorithm 1 and Algorithm 2.

Fig. 4. Franke's benchmark test function $F_1$ [34] on the left, and the root mean squared errors, $\epsilon_{rms}$, versus the average distance between nodes, $h$, for $F_1$.

All numerical experiments have been performed in MATLAB on a MacBook Pro with a 3.2 GHz Apple M1 processor and 16 GB of memory. The algorithm code has been parallelized using the MATLAB Parallel Computing Toolbox to improve performance. Moreover, the construction of the local domains by searching for the nearest evaluation points has been done using the MATLAB built-in function knnsearch from the Statistics and Machine Learning Toolbox.

Example 4.1. In the first example, we investigate the performance of the proposed methods on Franke's six test functions [34] on the unit square $[0,1] \times [0,1]$. For simplicity, we mainly show details for the test function $F_1$; the algorithms behave in a similar way on all other five test functions. The test function $F_1$ is given by
$$\begin{aligned} F_1(x,y) = {} & \frac{3}{4}\exp\left(-\frac{1}{4}\left((9x-2)^2 + (9y-2)^2\right)\right) + \frac{3}{4}\exp\left(-\frac{1}{49}(9x+1)^2 - \frac{1}{10}(9y+1)^2\right) \\ & + \frac{1}{2}\exp\left(-\frac{1}{4}\left((9x-7)^2 + (9y-3)^2\right)\right) - \frac{1}{5}\exp\left(-(9x-4)^2 - (9y-7)^2\right). \end{aligned}$$

The left of Fig. 4 shows the profile of the test function $F_1$. On the right of Fig. 4, the rate of convergence with respect to the spatial discretization is displayed for the two proposed algorithms and the reference implicit method. From the figure, it can be seen that Algorithm 2 has the highest convergence rate with the highest accuracy.

One of the main advantages of PS is that it does not have a shape parameter that needs to be determined during the interpolation process. However, we still need to select other parameters, such as the order of the PS RBF ($k$), the order of the polynomial basis ($m$), and the number of local points ($n$). Since both algorithms behave similarly and Algorithm 2 performs better in terms of accuracy without loss of computational efficiency, we examine the performance of Algorithm 1 (the worst-case scenario) as we vary $k$, $m$, and $n$; the findings presented here are comparable to those of Algorithm 2.

From the left of Fig. 5, we can see that the interpolation error decreases as the number of interpolation points increases, and the three separate lines


indicate that one may obtain even more accurate results by increasing the number of points in the local domains. From the right of Fig. 5, we observe that the ill-conditioning grows as the number of interpolation points increases, although the error decreases with increasing $N$, as shown on the left of Fig. 5.

Fig. 5. RMS errors versus $N$ (left) and maximum condition number of the local interpolation matrices versus $N$ (right) for various $n$ on $F_1$ with $k = 4$, $m = 3$ in Example 4.1 using Algorithm 1.

Fig. 6. Left: maximum errors versus the order of the PS $k$ for $N = 100^2$, $N_t = 9000$, $n = 30$, and $m = 6$ using different test functions and Algorithm 1. Middle: maximum errors versus the polynomial order $m$ for $N = 100^2$, $N_t = 9000$, $n = 30$, and $k = 4$ using different test functions. Right: CPU time (in seconds) versus $m$ for test function $F_1$ using Algorithm 1 in Example 4.1.

The left of Fig. 6 indicates that the accuracy does not change significantly when the order of the PS changes (about one order of magnitude of difference). The middle of Fig. 6 indicates that the accuracy improves when the order of the polynomial basis $m$ increases. Yet, one need not increase the order of the basis too much, as that leads to a high computational cost, as is evident from the right of Fig. 6. However, we can reduce the simulation time significantly by using parallel computing while calculating the weights associated with each node in the global sparse system; for instance, when using $k = 4$, $m = 6$, $N = 40{,}000$ for $F_1$, the CPU time is reduced from 1050 s to 250 s. Note that the results from Franke's test functions $F_4$ and $F_5$ [34] are also included; these test functions are given below for reference:
$$F_4(x,y) = \frac{1}{3}\exp\left[-\frac{81}{16}\left(\left(x - \tfrac{1}{2}\right)^2 + \left(y - \tfrac{1}{2}\right)^2\right)\right], \qquad F_5(x,y) = \frac{1}{3}\exp\left[-\frac{81}{4}\left(\left(x - \tfrac{1}{2}\right)^2 + \left(y - \tfrac{1}{2}\right)^2\right)\right].$$
Due to the relative smoothness of these two functions in comparison with $F_1$, the accuracy of the interpolation results for them is much better than for $F_1$.

To validate the stability of the methods, we examine the sensitivity of the algorithms with respect to the locations of the points. Here we consider the perturbed grid points
$$\tilde{x}_i = x_i + \rho\, x_i^{rand}\, \delta x, \qquad \tilde{y}_i = y_i + \rho\, y_i^{rand}\, \delta y,$$
where $(x_i, y_i) \in (0,1)^2$ are uniformly distributed grid points, $\rho$ denotes the degree of randomness, $\delta x$ and $\delta y$ are the shortest distances between two points in the $x$ and $y$ directions, and $x_i^{rand}$, $y_i^{rand}$ are uniformly distributed random numbers in $(0,1)$. Fig. 7 shows the RMS errors obtained using Algorithm 1 (left) and Algorithm 2 (right) for various degrees of randomness. When $\rho$ is zero, the points are evenly distributed, and as $\rho$ increases, the interpolation points become more distorted. However, the error does not change much for all three test functions; hence, the stability with regard to the point distribution is validated.

Table 2 shows the condition number of the global sparse matrix and the maximum condition number of all local collocation matrices when using PS of order $k = 4$ with $m = 6$ and $n = 30$ on $F_1$ for the two algorithms. This case was chosen mainly because it is a general case in which reasonable accuracy and efficiency are achieved. Clearly, the small collocation matrices have higher condition numbers as the numbers of interpolation and evaluation points increase, even becoming ill-conditioned in many cases. However, the global sparse matrices always have excellent condition numbers, regardless of which algorithm and which parameters are used. When dealing with ill-conditioned small local systems, we would recommend singular value decomposition, TSVD, or pre-conditioning techniques [20].

Furthermore, when comparing CPU times, Algorithm 1 demonstrates greater efficiency than Algorithm 2. By examining the CPU time of the proposed methods, the most time-consuming part of the algorithms is identified as the construction of the local matrices, while the time for solving the final sparse system is negligible. The algorithm provided by knnsearch in MATLAB for nearest-neighbor classification is not the most efficient such algorithm; the highly efficient kd-tree algorithm [35] can be used if greater computational efficiency is needed.

The algorithms introduced in this paper are very similar to the global RBF interpolation methods using CS-RBFs. Wendland's CS-RBFs were first introduced in [36]; they are piecewise polynomial


Fig. 7. Sensitivity with respect to the point distribution: RMS errors versus $\rho$ with $N = 100^2$, $N_t = 90^2$, and $n = 30$, using PS of order $k = 4$ and a polynomial basis of order $m = 6$ in Example 4.1.
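The perturbed grids behind this sensitivity test can be generated as in the following Python/NumPy sketch (an illustration on our part; the paper's experiments are in MATLAB, and the seed is arbitrary):

```python
import numpy as np

def perturbed_grid(n, rho, seed=0):
    """Tensor grid on (0,1)^2 with each node shifted by rho * U(0,1) times
    the nominal spacing in each direction (the perturbation used for Fig. 7)."""
    rng = np.random.default_rng(seed)
    x = np.linspace(0.0, 1.0, n)
    dx = x[1] - x[0]                       # shortest distance between points
    X, Y = np.meshgrid(x, x)
    Xt = X + rho * rng.random(X.shape) * dx
    Yt = Y + rho * rng.random(Y.shape) * dx
    return Xt, Yt

# rho = 0 leaves the grid unperturbed; rho = 1 allows shifts up to one spacing
Xt, Yt = perturbed_grid(100, rho=0.5)
```

Increasing `rho` distorts the point distribution exactly as described in the text, which is what the RMS-error curves of Fig. 7 are plotted against.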

Table 2
Condition numbers and CPU times for Algorithm 1 and Algorithm 2 using PS of order $k = 4$ with $m = 6$, $n = 30$ on $F_1$ in Example 4.1.

                                          Algorithm 1                      Algorithm 2
(N, N_t)                                  (50^2, 2000)   (150^2, 20000)    (50^2, 2000)   (150^2, 20000)
Condition No. of sparse matrix            2.89E+03       4.81E+04          3.78E+03       8.13E+06
Maximum condition No. of local matrices   1.89E+14       2.8405E+20        3.96E+18       1.04E+22
CPU time (s)                              1.71           75.60             3.09           166.37
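The condition numbers reported in Table 2 follow definition (19), $\sigma_{max}/\sigma_{min}$, which can be read directly off the singular values. Below is a small Python/NumPy sketch for a single local collocation block of the form (12) (the sizes and the thin-plate-spline choice here are illustrative, not the paper's exact $k = 4$ setup):

```python
import numpy as np

def tps(r):
    # phi_{2,2}(r) = r^2 ln r, taken as 0 at r = 0
    out = np.zeros_like(r, dtype=float)
    pos = r > 0
    out[pos] = r[pos] ** 2 * np.log(r[pos])
    return out

rng = np.random.default_rng(1)
Z = rng.random((30, 2))                               # n = 30 local nodes
A = tps(np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1))
P = np.hstack([np.ones((30, 1)), Z])                  # linear tail: 1, x, y
Psi = np.block([[A, P], [P.T, np.zeros((3, 3))]])     # local matrix as in (12)
s = np.linalg.svd(Psi, compute_uv=False)
cond = s[0] / s[-1]                                   # definition (19)
print(f"cond(Psi) = {cond:.2e}")
```

By default, `np.linalg.cond` returns the same 2-norm value, so either route can be used when tabulating the local and global condition numbers.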

Table 3
Comparison of the CS-RBF method and the proposed methods with $k = 4$, $m = 3$, $n = 30$ for function $F_1$ in Example 4.1.

N       N_t     CS-RBF                 Proposed Algorithm 1    Proposed Algorithm 2
                e_rms      CPU time    e_rms      CPU time     e_rms      CPU time
100^2   9000    7.66E-07   186.0       6.00E-08   15.10        4.02E-08   30.27
150^2   20000   2.01E-07   1454.8      8.34E-09   71.2         5.26E-09   136.57

Table 4
Comparison of $\epsilon_{rms}$ and $\epsilon_{max}$ using various RBFs with $N = 100^2$, $N_t = 9000$, and $n = 30$ for $F_6$ in Example 4.1.

RBF                e_rms       e_max       c
Gaussian           2.15E-13    4.06E-12    4.994
MQ                 1.17E-12    2.97E-11    3.208
IMQ                3.29E-13    9.93E-12    2.629
Matern (order 2)   1.71E-12    3.43E-11    1.909
PS4                3.22E-14    6.26E-13    –
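For reference, the RBFs compared in Table 4 can be written as in the sketch below (Python; $c$ denotes the shape parameter where one exists; the Gaussian scaling and the Matern form shown are common conventions that we assume, since the paper does not print them):

```python
import numpy as np

def gaussian(r, c):                    # exp(-(c r)^2), one common convention
    return np.exp(-(c * r) ** 2)

def mq(r, c):                          # multiquadric sqrt(r^2 + c^2)
    return np.sqrt(r ** 2 + c ** 2)

def imq(r, c):                         # inverse multiquadric 1 / sqrt(r^2 + c^2)
    return 1.0 / np.sqrt(r ** 2 + c ** 2)

def matern2(r, c):                     # an order-2 Matern variant (assumed form)
    return np.exp(-c * r) * (1.0 + c * r)

def ps4(r):                            # PS of order k = 4 in 2D: phi_{2,4}(r) = r^6 ln r
    r = np.asarray(r, dtype=float)
    out = np.zeros_like(r)
    pos = r > 0
    out[pos] = r[pos] ** 6 * np.log(r[pos])
    return out
```

PS4 carries no shape parameter, which is precisely the advantage the comparison in Table 4 is meant to highlight.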

basis functions, such as φ(r) = (1 − r)⁴₊ (4r + 1). Interpolation with CS-RBFs also results in a global sparse system [37]. Thus, we compare our algorithms against CS-RBF interpolation. It is worth mentioning that CS-RBF interpolation is still a global method, since the interpolation matrix is created using CS-RBFs with all interpolation points as the centers. Table 3 shows the performance in terms of accuracy and efficiency. In this example, PS of order 4 is used with a polynomial basis of order 3. From the previous analysis, we know that the accuracy can be improved further by using a higher-order polynomial basis or more neighboring points. Even while attaining slightly higher accuracy, the proposed methods are much more efficient in terms of computational time.

Table 4 shows that the interpolations achieved with the PS RBF have better accuracy than the other RBFs; Algorithm 1 is utilized to interpolate the test function

F6 = (1/9) √(64 − 81((x − 1/2)² + (y − 1/2)²)) − 1/2.

The Leave-One-Out Cross-Validation (LOOCV) method [25] is employed to determine the best shape parameter for those RBFs that have one, to ensure that we choose the ''optimal'' value to the best of our ability. In LOOCV, we fit the data several times, each time using a different training set (all but one data point) and testing set (the remaining data point), and then take the test root mean squared error to be the average over all tests.

Example 4.2. In this example, we further investigate the performance of the proposed Algorithm 1 using the function

F7(x, y) = √(x² + y²) + 0.2,   (20)

which has a singularity and is more challenging to interpolate. Fig. 8 shows the profile of F7 on [−1, 1] × [−1, 1] on the left, and a profile of the absolute error for the test function F7 on the right, where k = 4, m = 6, N = 150², Nt = 20,000 and n = 30. The absolute error is reasonably small, and it can be improved by employing more points in the local domains. The proposed method can certainly be used successfully for the interpolation of challenging functions without implementing any special treatments such as grid refinements.

We noticed that the higher the number of evaluation points Nt, the greater the accuracy of the algorithms, as long as Nt ≤ N. However, when Nt > N, we can extract sub-samples of dimension N̄t ≪ N, which can be interpolated group by group using the proposed method. These subsets need to be extracted as uniformly as possible from the domain to avoid any discontinuity between interpolated values (see Fig. 9). We implement this for the function F7 with N = 22,500 and various Nt higher than the given N. As shown in Table 5, the proposed method maintains a similar level of accuracy in all cases, regardless of the number of evaluation points. An increase in CPU time is expected as the number of sub-samples grows. However, this algorithm is well suited to parallel implementation, since the groups are chosen independently; the running time can thus be reduced significantly.

Table 5
RMS errors and CPU time of F7 interpolated with N = 22,500, n = 30 and various Nt (N < Nt) in Example 4.2.

Nt       No. of subsets   ε_rms      CPU time
30,000   2                2.70E−05   126.32
40,000   2                2.19E−05   151.40
50,000   3                2.14E−05   196.44
60,000   3                2.06E−05   220.11
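The group-wise strategy for Nt > N needs each sub-sample of evaluation points to cover the whole domain roughly uniformly. The paper does not specify its extraction scheme, so the strided split below is only one simple realization of that idea, and `uniform_subsets` is a hypothetical helper name.

```python
import numpy as np

def uniform_subsets(eval_pts, n_groups):
    """Split evaluation points into n_groups subsets that each cover the
    domain roughly uniformly.

    Sorting by position (x, then y) and taking strided slices interleaves
    the groups spatially, so no group is confined to one region.
    """
    order = np.lexsort((eval_pts[:, 1], eval_pts[:, 0]))
    return [eval_pts[order[g::n_groups]] for g in range(n_groups)]

rng = np.random.default_rng(0)
pts = rng.random((50_000, 2))          # Nt evaluation points, Nt > N
groups = uniform_subsets(pts, 3)
# each group can now be interpolated independently (and in parallel)
```

Because the groups are independent, this is also the natural unit of work for the parallel implementation mentioned in the text.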


Fig. 8. Profile of test function F7 on the left and absolute errors of interpolation with N = 150², Nt = 20,000, n = 30, k = 4, m = 6 on the right in Example 4.2.

Fig. 9. Set of evaluation points and two consistently distributed sets of interpolation points.

Example 4.3. Interpolation is one of the common methods used to enhance image quality. In this example, the 256 × 256 pixel 2D gray-scale Lena image shown on the left of Fig. 10 is enhanced; the enhanced 512 × 512 pixel image is shown on the right. The enhanced image is obtained by using Algorithm 1 with PS of order k = 1, a polynomial of order m = 0, and n = 3. As there are more evaluation points than interpolation points in this problem, the interpolation is performed group-wise. By maintaining a low order of polynomial basis and restricting the size of the local domains, we achieved visible improvements to the image while keeping the computational time low.

Example 4.4. The purpose of this example is to test Algorithm 1 in three-dimensional space. We use a computational domain of the Stanford Bunny, which is more challenging and irregular. The boundary data points for this domain are available at the website of the Stanford Computer Graphics Laboratory [38]. Algorithm 1 was tested on the function H in (21), a trivariate test function used in [13]. The function plotted on the bunny surface can be found in Fig. 11 (right):

H(x, y, z) = (1/3) exp[ −(81/16) ((x − 0.5)² + (y − 0.5)² + (z − 0.5)²) ].   (21)

In our numerical simulations, N = 4234 interpolation points, consisting of 2345 interior points and 1889 boundary points, are used to interpolate at Nt = 1143 test points. The computational domain with interpolation points lying on the boundary surface is shown in Fig. 11 (left). Since the original bunny data scale is too small for this application, the coordinates of the bunny points are multiplied by 10. Algorithm 1 is employed with PS of order k = 4, a polynomial basis of order m = 3, and n = 55. Fig. 12 (left) shows the profile of the absolute errors on the surface of the bunny, and Fig. 12 (right) further confirms that the absolute errors in the interior of the bunny remain as low as the order of 10⁻⁵. The largest absolute error over all evaluation points is within an order of 10⁻⁵, with the majority of evaluation points having absolute errors less than 10⁻⁶ (the tallest bar in the histogram). This verifies the method's high accuracy despite the complexity of the domain.

The left of Fig. 13 shows the maximum absolute errors, ε_max, versus the number of points in the local domains, n. Note that the range of n is large, spanning from 25 to 1000. The accuracy improves consistently as n grows: the greater the number of points in the local domains, the higher the accuracy of the algorithm. However, we recommend choosing n to be less than 200, as larger values result in increased computational time (as shown on the right of Fig. 13).

Fig. 14 shows the maximum absolute errors using Algorithm 1 when 4234 interpolation points and 1143 evaluation points are used in Example 4.4. The left of Fig. 14 plots ε_max versus the polynomial order m when the number of points in the local domains is chosen as n = 85 and n = 120. It can be seen from the figure that the order of the polynomial basis, m, should not be too large; the algorithm easily achieves an accuracy of order 10⁻⁵ when m is as small as 2. The right of Fig. 14 displays ε_max versus the order of PS, k, when the number of points in the local domains is chosen as n = 25, 45, and 85. Please note the following: (1) as discussed before, increasing the number of local points improves the accuracy, hence we did not present results for n = 120; (2) m = 3, n = 45 is sufficient to achieve reasonable accuracy, so the effect of the order of PS is not evident; (3) when n = 25, k has to be small, and in this case the best accuracy is obtained only when k = 1.

Thus, we recommend using enough points in the local domains to achieve high accuracy while keeping the order of the polynomial basis small. In addition, the order of PS does not need to be high, as it does not significantly affect the accuracy when a sufficient number of local points is used.

MATLAB code for this example is shared on MATLAB Central File Exchange [39].

5. Conclusion

In this paper, we improved two implicit localized RBF interpolation methods using polyharmonic splines and a polynomial basis for scattered data interpolation, originally proposed in [21]. The method uses evaluation points, or a combination of evaluation and interpolation points, as the search domain to create local domains of influence for each interpolation point. The resulting linear system is a sparse system with the evaluation points' function values as the unknowns.
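To make the local construction concrete, here is a direct (explicit) local PS interpolation sketch in 2D: for each evaluation point, gather n nearest neighbors with a k-d tree [35] and solve one small augmented system with a polyharmonic kernel and monomial basis. This is a simplification for illustration only; the paper's implicit Algorithm 1 instead couples the local systems into one global sparse system with the evaluation-point values as unknowns, and the kernel convention φ(r) = r^(2k) log r is an assumption.

```python
import numpy as np
from scipy.spatial import cKDTree

def ps(r, k=4):
    """2D polyharmonic spline phi(r) = r^(2k) log r, with phi(0) = 0."""
    return np.where(r > 0, r ** (2 * k) * np.log(np.maximum(r, 1e-300)), 0.0)

def local_ps_interp(X, f, Xe, n=30, k=4, deg=2):
    """Interpolate data (X, f) at points Xe using, for each point, its n
    nearest data sites, PS of order k, and monomials up to degree deg."""
    tree = cKDTree(X)
    mono = [(i, j) for i in range(deg + 1) for j in range(deg + 1 - i)]
    out = np.empty(len(Xe))
    for s, xe in enumerate(Xe):
        idx = tree.query(xe, n)[1]               # local domain of influence
        loc = X[idx]
        r = np.linalg.norm(loc[:, None] - loc[None, :], axis=-1)
        P = np.column_stack([loc[:, 0] ** i * loc[:, 1] ** j for i, j in mono])
        q = P.shape[1]
        M = np.zeros((n + q, n + q))             # augmented [A P; P^T 0]
        M[:n, :n], M[:n, n:], M[n:, :n] = ps(r, k), P, P.T
        coef = np.linalg.solve(M, np.concatenate([f[idx], np.zeros(q)]))
        re = np.linalg.norm(xe - loc, axis=-1)
        pe = np.array([xe[0] ** i * xe[1] ** j for i, j in mono])
        out[s] = ps(re, k) @ coef[:n] + pe @ coef[n:]
    return out

# sanity check: the scheme reproduces a linear function
rng = np.random.default_rng(0)
X = rng.random((400, 2))
f = 1 + 2 * X[:, 0] - 3 * X[:, 1]
Xe = rng.random((20, 2))
vals = local_ps_interp(X, f, Xe, n=30, k=2, deg=2)
```

Solving each local system separately, as above, is the easiest version to prototype; the implicit formulation trades these many small solves for one sparse global solve.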


Fig. 10. Profile of the original Lena image on the left and the enhanced Lena image on the right in Example 4.3.
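Example 4.3 amounts to interpolating known pixel intensities onto a finer grid. The minimal sketch below shows that workflow; SciPy's RegularGridInterpolator (bilinear by default) stands in here for Algorithm 1, which would use PS of order k = 1, m = 0, n = 3 on the same data, and the function name `upscale` and the toy 4 × 4 image are illustrative only.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def upscale(img, factor=2):
    """Upscale a grayscale image by interpolating pixel intensities
    (interpolation points = original pixels, evaluation points = the
    finer grid)."""
    h, w = img.shape
    interp = RegularGridInterpolator(
        (np.arange(h), np.arange(w)), img.astype(float))
    yy = np.linspace(0, h - 1, factor * h)
    xx = np.linspace(0, w - 1, factor * w)
    Y, X = np.meshgrid(yy, xx, indexing="ij")
    return interp(np.stack([Y, X], axis=-1))

img = np.arange(16, dtype=float).reshape(4, 4)   # toy "image"
big = upscale(img)                               # 8 x 8 result
```

As in the example, when the evaluation grid is larger than the data, the evaluation points can be split into groups and interpolated group-wise.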

Fig. 11. Profile of the computational domain (Stanford Bunny) with the set of interpolation points on the boundary (left) and the profile of 𝐻 function on the surface of the
bunny (right) in Example 4.4.

Fig. 12. The profile of the absolute errors on the surface of the bunny (left) and the count of data points in the interior of the bunny whose absolute errors fall within each range (right) in Example 4.4.


Fig. 13. The maximum absolute error, 𝜖𝑚𝑎𝑥 versus parameter 𝑛 on the left, and corresponding CPU time on the right in Example 4.4, where 𝑁 = 4234, 𝑁𝑡 = 1143.

Fig. 14. The maximum absolute error, ε_max, versus parameter m on the left and versus parameter k on the right in Example 4.4, where N = 4234, Nt = 1143.

The original paper claims the method ''does not produce reasonable results when using the polyharmonic splines''. However, we discovered in this paper that this claim is not true.

Interpolation with polyharmonic splines and low-order polynomials can be carried out with great accuracy and efficiency: the higher the order of the polyharmonic splines or the number of points in the local domains, the better the accuracy becomes, provided that the local domains contain sufficiently many points. Detailed conclusions are drawn below: (1) RBFs such as polyharmonic splines and polynomials contain no shape parameter and produce higher accuracy compared to other RBFs; (2) Algorithm 1 is based on the construction of local influence domains entirely from the evaluation points, while Algorithm 2 takes both interpolation and evaluation points into consideration. Both algorithms were found to be very attractive, easy to use in 2D and 3D problems, and largely able to overcome the downsides of global RBF interpolation using polyharmonic splines and a polynomial basis; (3) The computational complexities of Algorithm 1 and Algorithm 2 are very close. Although Algorithm 2 is more accurate than Algorithm 1, we mainly focused on Algorithm 1 due to its minor advantage in efficiency.

In summary, our proposed interpolation algorithms can handle high-dimensional data interpolation on complicated domains, without the hassle of searching for shape parameters and without loss of accuracy, and can be parallelized easily. This is a great improvement in terms of computational efficiency. Future work concerns deeper theoretical convergence analysis and error estimates. In addition, we aim to further reduce the computational complexity of Algorithm 2 and apply it to classification problems.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Data availability

We have shared our code on MATLAB File Exchange online.

Acknowledgments

The second and fourth authors, Guangming Yao and Gantumur Tsogtgerel, acknowledge the support of the Simons CRM Scholar-in-Residence Program. The third author, Jing Niu, acknowledges the support of the National Natural Science Foundation of China via grant 12101164. The fourth author, Gantumur Tsogtgerel, was also supported by an NSERC Discovery Grant, and by the fellowship grant P2021-4201 of the National University of Mongolia.

References

[1] Steffensen Johan Frederik. Interpolation. Courier Corporation; 2006.
[2] Davis Philip J. Interpolation and approximation. Courier Corporation; 1975.
[3] Lunardi Alessandra. Interpolation theory, Vol. 9. Springer; 2009.
[4] Szabados József, Vértesi Péter. Interpolation of functions. World Scientific; 1990.
[5] Skala Vaclav. RBF interpolation with CSRBF of large data sets. Procedia Comput Sci 2017;108:2433–7.
[6] Romani Lucia, Rossini Milvia, Schenone Daniela. Edge detection methods based on RBF interpolation. J Comput Appl Math 2019;349:532–47.


[7] Chen Chuin-Shan, Noorizadegan Amir, Young DL, Chen CS. On the selection of a better radial basis function and its shape parameter in interpolation problems. Appl Math Comput 2023;442:127713.
[8] Franke Richard, Nielson Gregory M. Scattered data interpolation and applications: A tutorial and survey. In: Hagen Hans, Roller Dieter, editors. Geometric modeling. Berlin, Heidelberg: Springer Berlin Heidelberg; 1991, p. 131–60.
[9] Hardy Rolland L. Multiquadric equations of topography and other irregular surfaces. J Geophys Res 1971;76(8):1905–15.
[10] Lazzaro Damiana, Montefusco Laura B. Radial basis functions for the multivariate interpolation of large scattered data sets. J Comput Appl Math 2002;140(1):521–36. Int. Congress on Computational and Applied Mathematics 2000.
[11] Renka Robert J. Multivariate interpolation of large sets of scattered data. ACM Trans Math Software 1988;14(2):139–48.
[12] Wendland Holger. Scattered data approximation. Cambridge monographs on applied and computational mathematics. Cambridge University Press; 2004.
[13] Bozzini Mira, Rossini Milvia. Testing methods for 3D scattered data interpolation. Multivar Approx Interpolat Appl 2002;20.
[14] Beatson Richard K, Light WA, Billings S. Fast solution of the radial basis function interpolation equations: Domain decomposition methods. SIAM J Sci Comput 2001;22(5):1717–40.
[15] Ling Leevan, Kansa Edward J. Preconditioning for radial basis functions with domain decomposition methods. Math Comput Modelling 2004;40(13):1413–27.
[16] Fasshauer Gregory E, Zhang Jack G. Preconditioning of radial basis function interpolation systems via accelerated iterated approximate moving least squares approximation. 2009.
[17] Fornberg Bengt, Larsson Elisabeth, Flyer Natasha. Stable computations with Gaussian radial basis functions. SIAM J Sci Comput 2011;33:869–92.
[18] Floater Michael S, Iske Armin. Multistep scattered data interpolation using compactly supported radial basis functions. J Comput Appl Math 1996;73(1):65–78.
[19] Flyer Natasha, Fornberg Bengt, Bayona Victor, Barnett Gregory A. On the role of polynomials in RBF-FD approximations: I. Interpolation and accuracy. J Comput Phys 2016;321:21–38.
[20] Hansen Per Christian. Regularization tools: A MATLAB package for analysis and solution of discrete ill-posed problems, Version 4.1. MATLAB Central File Exchange; 2023.
[21] Yao Guangming, Duo Jia, Chen CS, Shen LH. Implicit local radial basis function interpolations based on function values. Appl Math Comput 2015;265:91–102.
[22] Micchelli Charles A. Interpolation of scattered data: distance matrices and conditionally positive definite functions. Constr Approx 1986;2:11–22.
[23] Buhmann Martin D. Radial basis functions: theory and implementations. Cambridge Monogr Appl Comput Math 2003;12:147–65.
[24] Rippa Shmuel. An algorithm for selecting a good value for the parameter c in radial basis function interpolation. Adv Comput Math 1999;11(2–3):193–210.
[25] Fasshauer Gregory E, Zhang Jack G. On choosing ''optimal'' shape parameters for RBF approximation. Numer Algorithms 2007;45(1–4):345–68.
[26] Kansa EJ, Carlson RE. Improved accuracy of multiquadric interpolation using variable shape parameters. Comput Math Appl 1992;24(12):99–120.
[27] Fornberg B, Wright G. Stable computation of multiquadric interpolants for all values of the shape parameter. Comput Math Appl 2004;48(5–6):853–67.
[28] Chen CS, Karageorghis Andreas, Zheng Hui. Improved RBF collocation methods for fourth order boundary value problems. Commun Comput Phys 2020;27:1530–49.
[29] Zheng Hui, Lu Xujie, Jiang Pengfei, Yang Yabin. Numerical simulation of 3D double-nozzles printing by considering a stabilized localized radial basis function collocation method. Addit Manuf 2022;58:103040.
[30] Fedoseyev AI, Friedman MJ, Kansa EJ. Improved multiquadric method for elliptic partial differential equations via PDE collocation on the boundary. Comput Math Appl 2002;43(3–5):439–55.
[31] Liu Chein-Shan, Liu Dongjie. Optimal shape parameter in the MQ-RBF by minimizing an energy gap functional. Appl Math Lett 2018;86:157–65.
[32] Zheng Hui, Yao Guangming, Kuo Lei-Hsin, Li Xinxiang. On the selection of a good shape parameter of the localized method of approximated particular solutions. Adv Appl Math Mech 2018;10(4):896–911.
[33] Iske Armin. On the approximation order and numerical stability of local Lagrange interpolation by polyharmonic splines. In: Modern developments in multivariate approximation. Springer; 2003, p. 153–65.
[34] Franke Richard. Scattered data interpolation: tests of some methods. Math Comput 1982;38(157):181–200.
[35] Bentley Jon Louis. Multidimensional binary search trees used for associative searching. Commun ACM 1975;18(9):509–17.
[36] Wendland Holger. Piecewise polynomial, positive definite and compactly supported radial functions of minimal degree. Adv Comput Math 1995;4(1):389–96.
[37] Fasshauer Gregory E, McCourt Michael J. Kernel-based approximation methods using MATLAB, Vol. 19. World Scientific Publishing Company; 2015.
[38] Stanford Computer Graphics Laboratory. The Stanford 3D scanning repository.
[39] Yao Guangming. Polyharmonic splines interpolation on scattered data. 2023.

Update

An update to this article is included below.

Engineering Analysis with Boundary Elements 160 (2024) 298
DOI: https://doi.org/10.1016/j.enganabound.2024.01.006

Corrigendum to "Polyharmonic splines interpolation on scattered data in 2D and 3D with applications" [Engineering Analysis with Boundary Elements 156 (2023) 240–250]

Kalani Rubasinghe a, Guangming Yao a,*, Jing Niu b,**, Gantumur Tsogtgerel c,d

a Department of Mathematics, Clarkson University, Potsdam, NY 13699-5815, USA
b Department of Mathematics, Harbin Normal University, Harbin, China
c Department of Mathematics and Statistics, McGill University, Montréal, Québec, H3A 2K6, Canada
d Department of Physics, National University of Mongolia, Ulan Bator, Mongolia

The authors regret that the fourth author, Gantumur Tsogtgerel's, affiliation had been listed erroneously and incompletely. His affiliation should also include the following:

Department of Physics, National University of Mongolia, Ulan Bator, Mongolia

The authors would like to apologise for any inconvenience caused.

DOI of original article: https://doi.org/10.1016/j.enganabound.2023.08.001.
* Corresponding author: Guangming Yao, Department of Mathematics, Clarkson University, Potsdam, NY 13699-5815, USA.
** Corresponding author.
E-mail addresses: [email protected] (G. Yao), [email protected] (J. Niu).
Available online 13 January 2024
0955-7997/© 2024 Elsevier Ltd. All rights reserved.