A Physics-Constrained Data-Driven Approach Based On Locally Convex Reconstruction For Noisy Database
Abstract
Physics-constrained data-driven computing is an emerging hybrid approach that integrates universal physical laws with data-driven models of experimental data for scientific computing. A new data-driven simulation approach coupled with a locally convex reconstruction, termed local convexity data-driven (LCDD) computing, is proposed to enhance the accuracy and robustness of data-driven computing against noise and outliers in data sets. In this approach, for a given state obtained by the physical simulation, the corresponding optimum experimental solution is sought by projecting the state onto the associated local convex manifold reconstructed from the nearest experimental data. This learning of the local data structure is less sensitive to noisy data and consequently yields better accuracy. A penalty relaxation is also introduced to recast the local learning solver as a non-negative least squares problem that can be solved effectively. The reproducing kernel approximation with stabilized nodal integration is employed for the solution of the physical manifold, which reduces the stress–strain data to be searched at the discrete points and enhances the effectiveness of the LCDD learning solver. Due to its inherent manifold learning properties, LCDD performs well for high-dimensional data sets that are relatively sparse in real-world engineering applications. Numerical tests demonstrate that LCDD improves accuracy by nearly one order of magnitude compared to the standard distance-minimization data-driven scheme when dealing with noisy databases, and that linear exactness is achieved when the local stress–strain relation is linear.
© 2019 Elsevier B.V. All rights reserved.
Keywords: Data-driven computing; Locally convex reconstruction; Manifold learning; Noisy data; Local convexity data-driven (LCDD);
Reproducing kernel (RK) approximation
1. Introduction
With the proliferation of high-resolution datasets and significant advances in numerical algorithms, the emerging idea of utilizing both data-driven models and physical models simultaneously to enhance traditional scientific computing and engineering design procedures [1,2] has attracted increasing attention. This general approach is usually termed data-driven modeling [3] or data-driven engineering science. Data-driven modeling has close connections with areas such as statistics, data mining, and machine learning, which allow the extraction of insightful information or hidden structures from large volumes of data [4] for enhanced scientific computing. Data-driven approaches, such as machine learning [5,6], have been widely applied to computational biology [7], medical diagnosis [8], materials informatics [9,10], and other predictive physics problems [4,11].
∗ Correspondence to: Structural Engineering Department, University of California, San Diego (UCSD), La Jolla, CA 92093, USA
E-mail address: [email protected] (J.-S. Chen).
https://doi.org/10.1016/j.cma.2019.112791
0045-7825/© 2019 Elsevier B.V. All rights reserved.
2 Q. He and J.-S. Chen / Computer Methods in Applied Mechanics and Engineering 363 (2020) 112791
Recently, these approaches have been extended to the field of engineering mechanics, such as learning constitutive
models in solid mechanics [12–14], surrogate models in fluid mechanics [15–17] and physical models or governing
equations purely extracted from the collected data [18–20]. In conjunction with machine learning techniques such
as manifold learning [21] or neural networks [22], the recent studies [23–25] offer a new paradigm for data-driven
computing for various applications such as design of materials [26]. There is a vast body of literature devoted
to these subjects, including the recent developments based on nonlinear dimensionality reduction [24], nonlinear
regression, deep learning [27–29], among others.
Nevertheless, a purely data-driven methodology in the area of simulation-based engineering sciences (SBES) [30] is ineffective, since many physical systems obey well-accepted physical laws while useful data in SBES are very expensive to acquire [20,31]. Thus, it is imperative to develop data-driven simulation approaches that can leverage physical principles with limited data for highly complex systems. A solution for developing effective predictive models for complex real-world problems is to combine physics-based models with data-driven techniques under a hybrid computational framework. There are three types of hybrid physics-data approaches, depending on the roles that physical laws and data play in the hybrid model. The first approach enforces known physical constraints in data-driven models [32,33], which can be considered a data-fit type of surrogate model. In the second approach, on the contrary, existing physical models are enriched by information learned from data. This general framework can be used for obtaining data-enhanced physical models [34,35], updating dynamical systems online in a manner similar to data assimilation [36], or constructing reduced-order models [25,37–40]. The third approach applies data-driven models and physical models separately to approximate different aspects of the physical system and connects them consistently to perform numerical simulation.
The third class of methods is particularly attractive for computational mechanics because it preserves the well-
accepted physical laws under the variational framework and the prerequisite of big data is much relaxed. In contrast,
most other data-driven methods for solid or fluid simulation directly construct machine learning surrogates to link
the input–output relation with approximated physics laws [17,24,33] or with the constitutive models replaced by
supervised learning models such as neural networks [12–14,27], which could over-parameterize the material relation
and lead to numerical instability. In addition, the setting of training and architecture hyperparameters for neural
networks is not straightforward.
Under the framework of the third approach mentioned above, Kirchdoerfer and Ortiz [41–43] proposed a material model-free data-driven method, the so-called distance-minimizing data-driven computing (DMDD), for
modeling elasticity problems. This data-driven method enforces equilibrium and compatibility and directly utilizes
the material database, e.g. stress and strain data, under a modified variational framework, aiming to eschew the
empirical models that inevitably involve incomplete experimental information [41,42] and the process of material
parameter identification [44–46] that remains numerically intractable. In DMDD, the data-driven problem is solved
by minimizing the distance between the computed physical solutions (i.e. the set of equilibrium admissible stress
and kinematically admissible strain jointly) and a given set of experimental data under a proper energy norm.
A similar idea was proposed by Ibañez et al. [47] where manifold learning techniques are applied to material
database to construct the tangent stiffness approximation of constitutive relation, with which the convergent solution
could be attained by using directional search solvers [48,49]. In these methods, the data selection or automated
machine learning techniques on material data are carried out during the computation of the associated initial-
boundary-value problem, thus bypassing the traditional construction of constitutive models. Again, these methods
fall into the third class of the data-driven approaches discussed above, and they are usually defined as data-driven
computational mechanics (DDCM) [41,47]. This data-driven paradigm has been recently extended to dynamics [43],
nonlinear elasticity [35,50–52], material identification [53], and data completion [54]. Overall, the key idea of the
above mentioned methods is to seek the intersection of the hidden constitutive (material) manifold represented
by experimental data and the physical manifold by using iterative processes with appropriate search directions, as
shown in Fig. 1.
Despite the major advancements made in the field, dealing with noisy and sparse data remains challenging [1]. The standard DMDD paradigm [41] has been shown to be sensitive to noisy data and outliers [42,55], while approaches based on manifold learning [47] or local regression [56] may fail to converge due to over-relaxed manifold construction and lack of convexity. To enhance robustness, the DMDD approach was extended to max-ent data-driven computing [42], which utilizes entropy estimation to analyze the statistical information of the data. However, as simulated annealing algorithms are used to solve the resulting data-driven minimization problem,
Fig. 1. Schematics of data-driven computing for predictive modeling, where a set of data points {ŝi , i = 1, 2, . . .} are given to represent
the material behavior, E is an imaginary manifold of the material database representing the underlying constitutive relation, and C is the
physical manifold of admissible stress and strain states, s = (ε, σ ), satisfying equilibrium and compatibility. The data-driven solution s∗ is
solved by a fixed point iteration that searches the physical states s(v) ∈ C and the local data solutions ŝ∗(v) ∈ E via iterative projections,
where the subscript ‘v’ is the step indicator. In this study, only stresses and strains are considered in the material database.
the solution procedures become computationally intensive. Alternatively, Ayensa-Jimenez et al. [55] proposed to account for data uncertainty by explicitly incorporating statistical quantities into the standard DMDD approach, defining a stochastic analogue of the problem. In this approach, the expected value and variance–covariance matrix need to be estimated, the local data structure is still not taken into consideration, and the method therefore becomes ineffective in dealing with high-dimensional datasets.
In this paper, we propose a novel data-driven approach which utilizes the intrinsic local data structure to enhance
accuracy and robustness against noisy data while remaining computationally feasible. By assuming that each data point and its neighbors lie on or close to a locally linear patch of the manifold, the proposed approach approximates the underlying constitutive manifold near the physically admissible state by locally constructing a convex envelope based on the associated neighboring experimental data points. As a result, the proposed approach can utilize the local data
structure without explicitly constructing the local manifold or regression models needed in the approaches [47,55,56]
utilizing the associated tangent spaces for data-driven iterations. With this locally convex construction, the solution
space for searching optimum local data is regularized onto a bounded, continuous, and convex subset (polytope) for
enhanced robustness and convergence stability in data-driven computing. The proposed approach is, thus, referred
to as local convexity data-driven (LCDD) computing. In this approach, a cluster of experimental data associated
with the physical solution (e.g., the pair of strains and stresses) is first identified by the k-nearest neighbor (k-NN)
algorithm, and the optimum data solution is searched within the associated locally convex hull instead of the discrete
material set. To solve this local search problem efficiently, we recast the approach into a non-negative least squares
(NNLS) problem [57] by introducing the invariance constraint into the objective function. Because of the inherent manifold learning capacity of NNLS solvers, the proposed LCDD permits a locally linear approximation of the underlying material manifold, which means that LCDD can reproduce the solutions given by classical model-based simulation if the constitutive relation exhibits a locally linear pattern.
On the other hand, LCDD can be viewed as a generalization of DMDD, equipped with a suitable manifold learning technique that naturally takes the local data information into account while retaining a simple computing framework.
In the solution phase on the physical manifold, a constrained minimization problem is solved by introducing
a reproducing kernel approximation (RK) [58,59] in conjunction with a stabilized conforming nodal integration
(SCNI) [60] such that the displacements, stresses and strains are computed at the nodal points. This approach
significantly reduces the needed search for the optimal stress–strain data from the data set. The employment of the
RK approximation also introduces higher order smoothness to the solution space of the physical manifold, making
it consistent with the continuous and convex solution space of the regularized LCDD learning solver. It is noted that
there is no assumption of isotropy and homogeneity in the proposed LCDD data-driven computational framework.
The learning algorithm can identify the intrinsic properties of the given dataset. In this study, we only consider the
modeling of homogeneous material, that is, the same dataset is used to characterize the material behavior at every
evaluation point over the domain.
The objective of the present work is to study the main issues of data-driven approaches when dealing with
noisy data in high-dimensional space. The paper is organized as follows. In Section 2, a generalized data-driven
computational formalism is reviewed. In Section 3, locally convex reconstruction is introduced and the local
manifold learning for data-driven solver formulated under the NNLS framework is presented. Section 4 provides
numerical tests of truss structures to demonstrate the effectiveness of LCDD against noisy data. In Section 5, continuum mechanics with an elastic solid is considered to assess the accuracy and convergence properties of LCDD when the noisy data lie in a high-dimensional phase space. Finally, concluding remarks and discussions are given in Section 6.
Remark 2.1. For modeling homogeneous material, the same dataset E is used to characterize the material behavior
at every evaluation point over the domain. Note that this data-driven computing can be applied to heterogeneous
materials if space-dependent databases are available. In this approach, there are no predefined material models or material parameters to be identified, which makes data-driven computing different from classical simulation methods and material identification problems.
It is convenient to introduce the notion of phase space Z as the space of the strain–stress pairs (ε, σ ), and denote
C as the admissible set for elements (ε, σ ) ∈ Z that satisfy the physical constraints in (1) and (2), which is also
called the physical manifold. Ideally, the data-driven solution is the intersection of the global data set Eem and the
physical manifold set C, i.e. Eem ∩ C, where Eem = E × · · · × E ⊂ Z denotes the ensemble of the experimental set E
over Ω . Since E consists of a finite set of discrete data points which could lead to non-existence of the intersection
Eem ∩ C, a distance-minimizing relaxation is usually employed.
Data-driven computing [41,47] and data-enabled applications, such as dynamics data-driven application systems
(DDDAS) [1] and parameter identification for pre-defined material models [44–46], introduce distance-minimization
between the simulation data and the measurement data. The main difference between these approaches is that data-
driven computing is a forward problem while parameter identification is an inverse problem for material calibration.
We refer interested readers to the literature for more details of parameter identification [44–46]. Data-driven computing can be stated as one of the following double-minimization problems:
$$\min_{(\hat{\varepsilon},\hat{\sigma})\in E^{em}}\ \min_{(u,\sigma)\in C_u\times C_\sigma} H(u,\sigma,\hat{\varepsilon},\hat{\sigma}) \quad \text{or} \quad \min_{(u,\sigma)\in C_u\times C_\sigma}\ \min_{(\hat{\varepsilon},\hat{\sigma})\in E^{em}} H(u,\sigma,\hat{\varepsilon},\hat{\sigma}) \tag{3}$$
where H is a given functional to define a distance measure, which is to be elaborated in the next section, and
Cσ and Cu denote the sets of equilibrium admissible stress fields and kinematically admissible displacement fields,
respectively, i.e.,
$$C_\sigma = \{\tau \in V_\sigma \mid \nabla\cdot\tau + b = 0 \ \text{in}\ \Omega,\ \text{and}\ \tau\cdot n = t \ \text{on}\ \Gamma_t\}, \tag{4a}$$
$$C_u = \{v \in V_u \mid v = u \ \text{on}\ \Gamma_u\}, \tag{4b}$$
in which $V_\sigma = [L^2(\Omega)]^6$ is the symmetric stress space, and $V_u = [H^1(\Omega)]^3$ is the displacement space. Then the physical manifold set is defined as
$$C = \{(\varepsilon[u], \sigma) \mid u \in V_u,\ \sigma \in V_\sigma\}. \tag{4c}$$
Note that strain ε is obtained from the displacement u ∈ Cu using (2a), which is denoted by ε = ε[u].
Corresponding to the strain–stress state (ε, σ ) ∈ C obtained from the physical manifold C, (ε̂, σ̂ ) ∈ Eem is used
to denote the data from the experimental data set Eem . As illustrated in Fig. 1, the data-driven computing in (3) is
to find the state (ε, σ ) constrained to the physical set C while closest to the dataset Eem under a certain “distance”
measure defined by the functional H, such that the system response is determined directly from the experimental
data without specifying any constitutive models.
The data-driven problem in (3) can be decomposed into a two-step problem:
$$\text{Global step}: \quad J(\hat{\varepsilon},\hat{\sigma}) = \min_{(u,\sigma)\in C_u\times C_\sigma} H(u,\sigma,\hat{\varepsilon},\hat{\sigma}), \tag{5a}$$
$$\text{Local step}: \quad (\hat{\varepsilon}^*,\hat{\sigma}^*) = \arg\min_{(\hat{\varepsilon},\hat{\sigma})\in E^{em}} J(\hat{\varepsilon},\hat{\sigma}), \tag{5b}$$
where (ε̂ ∗ , σ̂ ∗ ) is the optimum experimental point closest to the computed state (ε, σ ) given in (5a). From an
optimization perspective, the solution procedures of this data-driven problem involve an alternate-direction search
where a minimization with respect to (u, σ ) is followed by a minimization with respect to (ε̂, σ̂ ), denoted as a
global step and a local step, respectively.
Compared to the problem setting in material parameter identification [45], the data-driven computing in (5) does
not rely on any pre-assumed elasticity tensor to relate ε and σ . Instead, it iteratively searches a representative
stress–strain pair from the experimental dataset for performing simulation.
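The alternate-direction search can be sketched as a short fixed-point driver. The following is a minimal illustration, not the paper's implementation: the callables `global_step` and `local_step` and the toy one-point phase space are assumptions of this example.

```python
import numpy as np

def data_driven_solve(global_step, local_step, s_hat0, tol=1e-10, max_iter=100):
    # Fixed-point iteration for the double minimization (3)/(5): alternate the
    # physics solve (global step) and the database projection (local step)
    # until the data state stops moving.
    s_hat = np.asarray(s_hat0, dtype=float)
    s = s_hat
    for _ in range(max_iter):
        s = global_step(s_hat)        # closest physically admissible state
        s_hat_new = local_step(s)     # closest experimental state
        if np.linalg.norm(s_hat_new - s_hat) < tol:
            break
        s_hat = s_hat_new
    return s, s_hat

# Toy 1D phase space: a "physical manifold" C = {(e, 1 - e)} (an equilibrium
# line) and a "material manifold" E = {(e, e)}; they intersect at (0.5, 0.5).
global_step = lambda z: np.array([(z[0] - z[1] + 1) / 2, 1 - (z[0] - z[1] + 1) / 2])
local_step = lambda z: np.full(2, (z[0] + z[1]) / 2)
s, s_hat = data_driven_solve(global_step, local_step, np.zeros(2))  # -> both at (0.5, 0.5)
```

Here each step is an exact Euclidean projection onto its line, so the iteration lands on the intersection point; in the actual method the projections are the global solve (5a) and the database search (5b).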
The norm $\|\cdot\|_Z$ associated with the phase space Z is defined as a combination of the energy-like and complementary energy-like functionals [41] as follows:
$$\|(\varepsilon,\sigma)\|_Z^2 = \frac{1}{2}\int_\Omega \left( \varepsilon : M^\varepsilon : \varepsilon + \sigma : M^\sigma : \sigma \right) d\Omega, \tag{6}$$
where M ε and M σ are two tensors to balance the contribution of the strain and stress data measured in different
physical units.
For numerical implementation, the state variables $(\varepsilon, \sigma)$ are computed at integration points $(\varepsilon_\alpha, \sigma_\alpha) = (\varepsilon(x_\alpha), \sigma(x_\alpha)) \in \mathbb{R}^q \times \mathbb{R}^q$, where $\{x_\alpha\}_{\alpha=1}^m$ are the coordinates of the m integration points (i.e., stress–strain evaluation points) and q is the dimension of stress and strain. As such, we denote $\{(\varepsilon_\alpha, \sigma_\alpha)\}_{\alpha=1}^m \in Z^h$, where $Z^h \subset Z$ is the discrete counterpart of the phase space. Correspondingly, the distance minimization in the local step searches for the local data solution $(\hat{\varepsilon}_\alpha, \hat{\sigma}_\alpha) \in E$ at every integration point $x_\alpha$, $\alpha = 1, \ldots, m$. In the subsequent discussion, we define $s_\alpha = [\varepsilon_\alpha^T\ \sigma_\alpha^T]^T \in \mathbb{R}^{2q}$ and $\hat{s}_\alpha = [\hat{\varepsilon}_\alpha^T\ \hat{\sigma}_\alpha^T]^T \in \mathbb{R}^{2q}$ as the computational and experimental strain–stress pairs, respectively, in the local phase space.
A functional H, defined as the discrete form of (6) to measure the distance between $\{(\varepsilon_\alpha,\sigma_\alpha)\}_{\alpha=1}^m$ and $\{(\hat{\varepsilon}_\alpha,\hat{\sigma}_\alpha)\}_{\alpha=1}^m$, is given as
$$H(u,\sigma,\hat{\varepsilon},\hat{\sigma}) = \|(\varepsilon-\hat{\varepsilon},\ \sigma-\hat{\sigma})\|_Z^2 \approx \sum_{\alpha=1}^m d^2(s_\alpha, \hat{s}_\alpha)\, V_\alpha, \tag{7}$$
where $\{V_\alpha\}_{\alpha=1}^m$ are the quadrature weights associated with the m integration points, and
$$d^2(s_\alpha, \hat{s}_\alpha) = \frac{1}{2}(\varepsilon_\alpha - \hat{\varepsilon}_\alpha)^T M_\alpha^\varepsilon (\varepsilon_\alpha - \hat{\varepsilon}_\alpha) + \frac{1}{2}(\sigma_\alpha - \hat{\sigma}_\alpha)^T M_\alpha^\sigma (\sigma_\alpha - \hat{\sigma}_\alpha). \tag{8}$$
Here $M_\alpha^\varepsilon \in \mathbb{R}^{q\times q}$ and $M_\alpha^\sigma \in \mathbb{R}^{q\times q}$ are symmetric, positive-definite coefficient matrices for multivariate distance measures, and usually $M_\alpha^\sigma = M_\alpha^\varepsilon$. One approach for selecting the coefficient matrices is by computing the inverse of the covariance of the material data set and using the so-called Mahalanobis distance for multivariate data, as proposed in [55]. Investigating the effect of the coefficient matrices is beyond the scope of this study. Numerical examples show that by using the proposed locally convex construction scheme with a coefficient matrix representing linear elasticity, which can be extracted from the stress–strain dataset in the small strain range, satisfactory data-driven results are achieved.
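As an illustration of how a linear-elastic coefficient matrix could be extracted from the small-strain range of a dataset, the sketch below fits a tangent by least squares. The function name, the strain-norm threshold, and the fitting strategy are assumptions of this example rather than a procedure specified here:

```python
import numpy as np

def estimate_elastic_tangent(strains, stresses, strain_cap=1e-3):
    # Fit a linear tangent C such that sigma ~ C @ eps using only the
    # small-strain samples; C could then serve as the coefficient matrix
    # M^eps in the distance measure.
    mask = np.linalg.norm(strains, axis=1) < strain_cap
    E, S = strains[mask], stresses[mask]
    X, *_ = np.linalg.lstsq(E, S, rcond=None)  # solves E @ X ~ S row-wise
    return X.T                                 # stresses ~ strains @ C.T

# Synthetic check: data generated by a known diagonal tangent; the
# large-strain sample is excluded by the threshold.
C_true = np.array([[2.0, 0.0], [0.0, 3.0]])
eps = np.array([[1e-4, 0.0], [0.0, 1e-4], [5e-5, 5e-5], [2e-3, 0.0]])
sig = eps @ C_true.T
C_fit = estimate_elastic_tangent(eps, sig)  # recovers C_true
```

Because the synthetic data are exactly linear, the least-squares fit reproduces the generating tangent; with noisy data the fit instead returns an averaged small-strain stiffness.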
where λ and η are the Lagrange multipliers in proper function spaces. The Euler–Lagrange equation of (11) reveals
η = −λ on Γt [62]. Considering Eqs. (6)–(9), and ε = ε[u], the variational form is
$$\begin{aligned}
\delta L^{DD}(u,\sigma,\lambda) = {} & \int_\Omega \left( \delta\varepsilon[u] : M^\varepsilon : (\varepsilon[u] - \hat{\varepsilon}) + \delta\sigma : M^\sigma : (\sigma - \hat{\sigma}) \right) d\Omega \\
& - \int_\Omega \delta\sigma : \varepsilon[\lambda]\, d\Omega + \int_{\Gamma_u} (\delta\sigma\cdot n)\cdot\lambda\, d\Gamma - \int_\Omega \delta\varepsilon[\lambda] : \sigma\, d\Omega \\
& + \int_\Omega \delta\lambda\cdot b\, d\Omega + \int_{\Gamma_t} \delta\lambda\cdot t\, d\Gamma,
\end{aligned} \tag{12}$$
Note that λ = 0 on Γu has been introduced. In this study, the displacement u, the Lagrange multiplier λ, and the stress σ are approximated by
$$u(x) \approx u^h(x) = \sum_{I=1}^N \Psi_I(x)\, d_I, \tag{14a}$$
$$\lambda(x) \approx \lambda^h(x) = \sum_{I=1}^N \Psi_I(x)\, \Lambda_I, \tag{14b}$$
$$\sigma(x) \approx \sigma^h(x) = \sum_{\alpha=1}^m \chi_\alpha(x)\, \sigma_\alpha, \tag{14c}$$
where N is the number of discretization nodes, m is the number of stress–strain evaluation points $x_\alpha$, $\{d_I\}_{I=1}^N$ are the nodal displacement vectors, $\{\Lambda_I\}_{I=1}^N$ are the nodal Lagrange multiplier vectors, and $\chi_\alpha(x)$ is an indicator function such that $\chi_\alpha(x) = 1$ if $x \in \Omega_\alpha$ and $\chi_\alpha(x) = 0$ if $x \notin \Omega_\alpha$, where $\Omega_\alpha$ is the subdomain associated with the integration point $x_\alpha$. Here, we employ reproducing kernel (RK) shape functions $\{\Psi_I\}_{I=1}^N$ [58,59] constructed using the cubic B-spline kernel function and linear basis functions. The RK approximation is summarized in Appendix B. Stress in (13c) is discretized by a collocation approach in (14c). Thus, the discrete form of Eq. (13c)
yields
$$\sum_{\alpha=1}^m V_\alpha\, \delta\sigma_\alpha^T \left( M_\alpha^\sigma \sigma_\alpha - \sum_{I=1}^N B_{\alpha I} \Lambda_I \right) = \sum_{\alpha=1}^m V_\alpha\, \delta\sigma_\alpha^T M_\alpha^\sigma \hat{\sigma}_\alpha, \tag{15}$$
where $\{V_\alpha\}_{\alpha=1}^m$ are the quadrature weights as defined in (7), and $\{f_I\}_{I=1}^N$ are the nodal force vectors associated with the employed RK approximation of the body force b and surface traction t. It can be seen that $\{d_I\}_{I=1}^N$ are solved from (16a) directly, and $\{\Lambda_I\}_{I=1}^N$ represent the displacement adjustment related to the difference between the computational stress and the given stress data $\{\hat{\sigma}_\alpha\}_{\alpha=1}^m$, as shown in Eq. (16c). Plugging (16c) into (16b) yields
$$\sum_{J=1}^N \left( \sum_{\alpha=1}^m V_\alpha B_{\alpha I}^T (M_\alpha^\sigma)^{-1} B_{\alpha J} \right) \Lambda_J = f_I - \sum_{\alpha=1}^m V_\alpha B_{\alpha I}^T \hat{\sigma}_\alpha, \quad I = 1,\ldots,N, \tag{17a}$$
$$\sigma_\alpha = \hat{\sigma}_\alpha + (M_\alpha^\sigma)^{-1} \sum_{J=1}^N B_{\alpha J} \Lambda_J, \quad \alpha = 1,\ldots,m. \tag{17b}$$
In summary, Eqs. (16a), (17a) and (17b) constitute the global step of the data-driven solver. In each global step, the displacement vector $\{d_I\}_{I=1}^N$ is obtained from the strain data $\{\hat{\varepsilon}_\alpha\}_{\alpha=1}^m$ by complying with compatibility, while the displacement adjustment $\{\Lambda_I\}_{I=1}^N$ is driven by the force residuals between the external force and the internal force computed from the experimental stress data $\{\hat{\sigma}_\alpha\}_{\alpha=1}^m$, as shown in (17a).
In this study, we propose to use stabilized conforming nodal integration (SCNI) [60] for the integration of the weak form (13) due to its nodal representation of both state and field variables. A brief summary of SCNI can be found in Appendix B. In this approach, the continuum domain is partitioned by a Voronoi diagram (see Fig. 15), and both the state variables $\{(\varepsilon_\alpha,\sigma_\alpha)\}_{\alpha=1}^N = \{(\varepsilon(x_\alpha),\sigma(x_\alpha))\}_{\alpha=1}^N$ and the nodal displacement vectors $\{u_I\}_{I=1}^N = \{u(x_I)\}_{I=1}^N$ are computed at the set of nodes located at $\{x_\alpha\}_{\alpha=1}^N$, i.e. $m = N$. This
approach minimizes the number of integration points where the stress and strain experimental data are searched
in the local step (5b), allowing enhanced effectiveness in the learning solver. The introduction of a smooth reproducing kernel (RK) shape function in the displacement approximation in (14a), such as the employment of a cubic B-spline in the RK approximation in Eqs. (B.1)–(B.7) in Appendix B, yields a $C^2$-continuous strain–displacement matrix $B_I(x_\alpha)$, and consequently smooth tangent matrices in the displacement adjustment and stress update equations in (17a) and (17b), respectively. This smooth solution space of the physical manifold is made
consistent with the continuous and convex solution space of the regularized LCDD learning solver to be introduced
in Section 3.
In the standard scheme, the local step searches for the nearest experimental data point:
$$(\hat{\varepsilon}_\alpha^*, \hat{\sigma}_\alpha^*) = \arg\min_{(\hat{\varepsilon}_\alpha, \hat{\sigma}_\alpha)\in E} d^2(s_\alpha, \hat{s}_\alpha), \tag{18}$$
for α = 1, . . . , m.
Remark 2.2. It has been observed that the distance-minimizing data-driven (DMDD) computing solver [41] with the distance measure in (7)–(9) is sensitive to data noise and outliers [42,55], because the local minimization stage (18) only searches for the nearest data point in the given experimental data set, regardless of any latent data structure. The data-driven solution can be strongly influenced by outliers that are located near the physical manifold C but do not conform to the hidden material data pattern (or the latent statistical model) of E. Without knowledge of the underlying data manifold, a large amount of data is required to achieve sufficiently accurate predictions, which is costly [20,31].
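For reference, the DMDD local search described above reduces to a weighted nearest-neighbor query over the entire database. A minimal sketch, where the function name and the single shared weight matrix `M` are simplifying assumptions for illustration:

```python
import numpy as np

def dmdd_local_step(s, data, M):
    # Return the database point closest to the physical state s in the
    # weighted distance (s_hat - s)^T M (s_hat - s).
    diff = data - s[None, :]
    d2 = np.einsum('ij,jk,ik->i', diff, M, diff)  # quadratic form per row
    return data[np.argmin(d2)]

# An outlier lying near the physical state wins the search even though it
# does not conform to the underlying (here linear) material pattern:
data = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0], [0.9, 1.2]])  # last row: outlier
nearest = dmdd_local_step(np.array([0.95, 1.15]), data, np.eye(2))  # picks the outlier
```

The toy query illustrates the sensitivity discussed in Remark 2.2: the selected point is determined purely by distance, so a single off-manifold outlier can dominate the local step.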
In this section, we introduce the LCDD approach, which employs a locally convex reconstruction technique inspired by manifold learning strategies [21,63,64]. Our aim in this work is to develop a computationally feasible data-driven predictive modeling framework that enhances accuracy and robustness against noise and outliers in the experimental data set by constructing the local manifold with the desired smoothness and convexity.
For ease of exposition, we define a weighted vector norm $\|\cdot\|_M$ based on (8) as follows:
$$\|s_\alpha\|_M^2 = s_\alpha^T M_\alpha s_\alpha \equiv \|M_\alpha^{1/2} s_\alpha\|^2, \tag{19}$$
where $s_\alpha = [\varepsilon_\alpha^T\ \sigma_\alpha^T]^T \in \mathbb{R}^{2q}$, $M_\alpha = \mathrm{diag}(M_\alpha^\varepsilon, M_\alpha^\sigma)$, and $M_\alpha^{1/2}$ can be determined by the singular value decomposition of $M_\alpha$. For a given $s_\alpha$, the local step (18) is rewritten as
$$\hat{s}_\alpha^* = \arg\min_{\hat{s}_\alpha \in E} \|s_\alpha - \hat{s}_\alpha\|_M^2, \tag{20}$$
for α = 1, . . . , m.
It has been shown [21,63,65–67] that naturally occurring data usually reside on a lower-dimensional submanifold embedded in the high-dimensional ambient space, as shown in Fig. 2. In this study, inspired by the locally linear embedding (LLE) approach [63], we assume that there exists an underlying manifold of low dimensionality corresponding to the raw experimental data set, i.e. $E = \{\hat{s}_i, i = 1, \ldots, p\}$ where p is the number of data points, that is locally linear and smoothly varying. Therefore, a data point $\hat{s}_i \in E$ can be linearly reconstructed from its neighbors in the data set, i.e.
$$\hat{s}_i \approx \hat{s}_i^{recon} = \sum_{j\in N_k(\hat{s}_i)} w_{ij}\, \hat{s}_j, \tag{21}$$
where $\hat{s}_i^{recon}$ is the reconstruction of $\hat{s}_i$, $N_k(\hat{s}_i)$ is the set of the k nearest neighbor (k-NN) data points to $\hat{s}_i$ in E, and $w_{ij}$ are the unknown coefficients. In LLE, the optimal reconstruction weights $w_{ij}^*$ can be obtained by solving the following problem:
$$\begin{aligned}
\{w_{ij}^*\}_{i,j=1,\ldots,p} = {} & \arg\min \sum_{i=1}^p \Big\| \hat{s}_i - \sum_{j=1, j\neq i}^p w_{ij}\, \hat{s}_j \Big\|^2 \\
\text{subject to}: \quad & \sum_{j=1}^p w_{ij} = 1, \quad i = 1,\ldots,p, \\
& w_{ij} = 0 \ \text{if}\ j \notin N_k(\hat{s}_i).
\end{aligned} \tag{22}$$
Note that $w_{ij} = 0$ when $i = j$. The data reconstruction procedures in (21) and (22) provide the projection of $\hat{s}_i$, i.e. $\hat{s}_i^{recon}$, onto the subspace spanned by $\{\hat{s}_j\}_{j\in N_k(\hat{s}_i)}$ with respect to the norm.
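For a single point, problem (22) has the classical LLE closed-form solution via the local Gram matrix: solve Gw = 1 with a Lagrange multiplier enforcing the sum-to-one constraint, then normalize [63]. A minimal sketch, where the function name and the regularization constant are assumptions of this illustration:

```python
import numpy as np

def lle_weights(x, neighbors, reg=1e-8):
    # Reconstruction weights for one point: minimize ||x - sum_j w_j n_j||^2
    # subject to sum_j w_j = 1, i.e. (22) restricted to a single i.
    D = x[None, :] - neighbors                          # (k, d) difference vectors
    G = D @ D.T                                         # local Gram matrix
    G = G + reg * (np.trace(G) + 1.0) * np.eye(len(G))  # stabilize if singular
    w = np.linalg.solve(G, np.ones(len(G)))
    return w / w.sum()                                  # enforce partition of unity

# A point midway between two neighbors is reconstructed with equal weights:
w = lle_weights(np.array([0.5, 0.0]), np.array([[0.0, 0.0], [1.0, 0.0]]))  # -> [0.5, 0.5]
```

The small diagonal regularization is the standard remedy when k exceeds the ambient dimension and G becomes singular, as in this two-neighbor example.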
Different from standard LLE, the search for the solution data point in data-driven computing is constrained by the physical manifold associated with (1) and (2). However, considering that the data-driven algorithm in (5) performs a fixed-point iteration on the experimental data points that are closest to the physical manifold, the locally linear reconstruction remains suitable for this scenario. In this sense, it is similar to the out-of-sample extension problem [64,68,69], where new projected data points (new samples) are added to a previously learnt low-dimensional embedding, as shown in Fig. 2. In addition, from a physical perspective, the data-driven solution constrained by the physical laws needs to be close enough to the graph of the experimental data with its underlying constitutive data structure. Thus, we need to prevent the reconstructed data point in (21) from projecting to a point that is far away from the underlying material data structure on the embedded subspace. To this end, we propose a local manifold learning algorithm to reconstruct the given local state on the locally convex manifold of the experimental data set.
Given a local state $s_\alpha$, the most representative k nearest neighbor (k-NN) points in E are first identified using the metric induced by the given norm $\|\cdot\|_M$ and collected as $\{\hat{s}_i\}_{i\in N_k(s_\alpha)} \subset E$, in which the indices of the nearest neighbors of $s_\alpha$ are stored in the set $N_k(s_\alpha)$. Then we project the local state onto the convex hull of $\{\hat{s}_i\}_{i\in N_k(s_\alpha)}$ associated with $s_\alpha$, which is defined as:
$$E(s_\alpha) = \mathrm{Conv}\big(\{\hat{s}_i\}_{i\in N_k(s_\alpha)}\big) = \Big\{ \sum_{i\in N_k(s_\alpha)} w_i\, \hat{s}_i \ \Big|\ \sum_{i\in N_k(s_\alpha)} w_i = 1,\ \text{and}\ w_i \geq 0,\ \forall i \in N_k(s_\alpha) \Big\}, \tag{23}$$
Fig. 2. Schematic of a manifold embedded in the original space and the associated low-dimensional embedding, where the training samples
and the new sample are denoted by gray circles and red star, respectively.
or concisely denoted as Eα . Accordingly, the optimal reconstruction coefficients are given by solving the following
minimization problem:
$$\begin{aligned}
w_\alpha^* = {} & \arg\min_{w\in\mathbb{R}^k} \Big\| s_\alpha - \sum_{i\in N_k(s_\alpha)} w_i\, \hat{s}_i \Big\|_M^2 \\
\text{subject to}: \quad & \sum_{i\in N_k(s_\alpha)} w_i = 1, \\
& w_i \geq 0, \quad \forall i \in N_k(s_\alpha),
\end{aligned} \tag{24a}$$
where $w \in \mathbb{R}^k$ denotes the vector consisting of the weights $\{w_i\}_{i\in N_k(s_\alpha)}$ corresponding to the k selected neighbor points, and $w_\alpha^*$ with the subscript α denotes the optimal weights corresponding to the given local state $s_\alpha$. The reconstruction $\hat{s}_\alpha^*$ can be retrieved from the linear combination of $\{\hat{s}_i\}_{i\in N_k(s_\alpha)}$ with the computed weight vector $w_\alpha^*$ as follows
$$\hat{s}_\alpha^* = \sum_{j\in N_k(s_\alpha)} w_j^*\, \hat{s}_j = \hat{S}_\alpha w_\alpha^*, \tag{24b}$$
where $\hat{S}_\alpha \in \mathbb{R}^{2q\times k}$ is the matrix whose columns are the k-NN data points $\{\hat{s}_i\}_{i\in N_k(s_\alpha)}$. The approach in (24) is called locally convex construction. Compared to Eq. (22), the main differences in (24a) are: (1) a new data point $s_\alpha$ obtained from the physical solver, instead of another point in the experimental data set, is used for the local construction; (2) a weighted vector norm $\|\cdot\|_M$ representing energy is adopted for the distance measure.
Based on the idea of locally convex construction, the local step of data-driven computing in (20) is modified as: given the data-set neighbors $\{\hat{s}_i\}_{i\in N_k(s_\alpha)} \subset E$ for $s_\alpha$, solve $\hat{s}_\alpha^*$ such that
$$\hat{s}_\alpha^* = \arg\min_{\hat{s}_\alpha \in E_\alpha} \|s_\alpha - \hat{s}_\alpha\|_M^2, \tag{25}$$
for α = 1, . . . , m. By comparing (20) and (25), we can observe that the space E used in the standard data-driven
scheme [41,55] is now replaced by the associated convex hull $E_\alpha$ that is locally reconstructed around the input $s_\alpha$ by learning techniques, allowing the local material manifold to be captured. Consequently, the reconstruction data (i.e., the
Fig. 3. Sketch of the projection ŝ∗α (the blue square) on a convex hull Eα (the region is depicted by red dashed lines) of k-NN points (the
solid circles in black) when a local state sα (the red star) is located (a) inside or (b) outside Eα. k = 6 neighbor points are used for demonstration.
optimal local data) ŝ∗α is sought from the set Eα with convexity and smoothness. With the definition in (23), the
solution of the minimization problem (25) is obtained by solving (24).
Remark 3.1. Eq. (24a) is a constrained regression, or constrained least-squares, problem under an invariance constraint and a non-negativity constraint. The invariance constraint imposes the partition of unity on the weight vector $w$, i.e. $\mathbf{1}^T w = 1$, where $\mathbf{1} = [1, 1, \ldots, 1]^T \in \mathbb{R}^k$. It ensures the invariance of the reconstruction weights $w_\alpha^*$ to rotations, rescalings, and translations of the same k-NN data points, and thus the weights characterize geometric properties independent of a particular frame of reference [63,64]. It also guarantees the linear approximation property such that $\hat{s}_\alpha^*$ lies in the subspace $\mathrm{span}(\{\hat{s}_i\}_{i\in N_k(s_\alpha)})$. When we further consider the non-negativity constraint, the approximation $\hat{s}_\alpha^*$ is restricted to the convex hull $E(s_\alpha)$ (see Fig. 3). The imposed convexity and locality yield enhanced robustness of linear regression to outliers [64,70] and reduce numerical instability across different clusters of neighbor points during the data-driven iterations. Moreover, it is well known that the non-negativity constraint naturally imposes sparseness on the coefficient solution $w_\alpha^*$. Lastly, by specifying the number of k-NN points, it provides an opportunity to incorporate a priori knowledge about the experimental data structure and, therefore, enhance the robustness of data learning [64].
Essentially, the modified local step of data-driven problem in (25) can be interpreted as a process of seeking
the data approximation based on the previously learnt low-dimensional manifold Eα associated with the given local
state. From a geometrical point of view, it searches the projection (i.e. the optimal local material data ŝ∗α ∈ Eα ) in
the associated convex set Eα . If sα lies inside Eα , the projection is sα itself (Fig. 3a). Otherwise, the local
state is optimally projected onto the convex hull Eα , and the projection point is taken as the best representative
on the constitutive manifold (Fig. 3b).
In this section, a computationally feasible algorithm is developed to solve the local step minimization problem
in (24a) by relating it to the non-negative least squares (NNLS) problem that has been well established. The NNLS
problem is reviewed in Appendix A.
However, to solve the minimization problem (24a) under the NNLS framework, see (A.1), the partition of
unity constraint in (24a) needs to be properly handled. To this end, we propose to employ the quadratic penalty
method [71] to penalize the residuals of the partition of unity constraint in the auxiliary objective, and the modified
minimization problem becomes
w∗α = arg min_{w∈Rk} ∥sα − Ŝα w∥²M + ξ (1T w − 1)² , subject to: wi ≥ 0, i = 1, . . . , k, (26)
where ξ > 0 is a penalty coefficient that enforces the associated constraint.
Note that to conform with the Euclidean metric used in the standard NNLS solver (A.1), the local state sα can
be easily rescaled to zα = Mα^{1/2} sα (similarly, Ẑα = Mα^{1/2} Ŝα ) by using the relation given in (19). As a result, the
minimization problem (26) can be recast into NNLS form by augmenting the vector zα and the matrix Ẑα with
additional components as follows:
w∗α = arg min_{w∈Rk} ∥zα^aug − Ẑα^aug w∥² , subject to: wi ≥ 0, i = 1, . . . , k, (27)
where
Ẑα^aug := [ Ẑα ; √ξ 1T ] ∈ R^{(2q+1)×k} , zα^aug := [ zα ; √ξ ] ∈ R^{2q+1} . (28)
To properly impose the penalty term, we set ξ = ξ̄ tr(ẐαT Ẑα )/k, where ξ̄ is a large parameter (usually set as 10⁴–10⁶).
With the weight solution w∗α obtained by the NNLS algorithm, the reconstruction ŝ∗α is computed via (24b).
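The penalty-relaxed local step (26)–(28) maps directly onto a standard NNLS call. A minimal sketch (the helper name is ours; SciPy's `nnls` implements the Lawson–Hanson algorithm) could read:

```python
import numpy as np
from scipy.optimize import nnls

def lcdd_local_weights(s, S_hat, M, xi_bar=1e5):
    """Penalty-relaxed local step, cf. (26)-(28): rescale by M^(1/2) so the
    energy norm becomes Euclidean, append a sqrt(xi)*1^T row enforcing the
    partition of unity, then call a standard NNLS solver."""
    L = np.linalg.cholesky(M)                 # M = L L^T, so ||x||_M = ||L^T x||
    Z = L.T @ S_hat                           # rescaled neighbor matrix (2q, k)
    z = L.T @ s
    k = S_hat.shape[1]
    xi = xi_bar * np.trace(Z.T @ Z) / k       # penalty scaling with large xi_bar
    Z_aug = np.vstack([Z, np.sqrt(xi) * np.ones((1, k))])
    z_aug = np.concatenate([z, [np.sqrt(xi)]])
    w, _ = nnls(Z_aug, z_aug)
    return w, S_hat @ w                       # weights and reconstruction (24b)

# toy 1-D stress-strain phase space (q = 1): neighbors on sigma = 100 * eps
S_hat = np.array([[0.0, 1.0, 2.0],            # strains of the 3 neighbors
                  [0.0, 100.0, 200.0]])       # stresses
w, s_star = lcdd_local_weights(np.array([0.5, 80.0]), S_hat,
                               M=np.diag([100.0, 0.01]))
```

Because the neighbors here are colinear, the reconstruction lands exactly on the line σ = 100ε regardless of where the input state sits, illustrating the linear reproducibility discussed later.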
It is possible that the constrained least squares in (27) could suffer from numerical instability due to rank deficiency
when the number of neighbors is larger than the rank of the neighborhood, i.e. k > rank(Ẑα^aug ). As has been well
studied in the machine learning field [5], a further regularization can be introduced to the NNLS problem. In this
study, the commonly used ridge regression [72], also called Tikhonov regularization, is applied to address the
ill-posedness, and the NNLS problem (27) is modified as
w∗α = arg min_{w∈Rk} ∥Ẑα^aug w − zα^aug ∥² + µ∥w∥² , subject to: wi ≥ 0, i = 1, . . . , k, (29)
where the regularization coefficient is
µ = µ̄ tr(ẐαT Ẑα )/k. (30)
Here µ̄ is a small constant (set as 10⁻⁴ by default) such that the regularization has a minor effect on the solution
w∗α and the reconstruction ŝ∗α . It is also shown in [5] that this regularization imposes certain smoothness on the
solution and guarantees a unique solution.
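The ridge term in (29) can likewise be absorbed into the same NNLS machinery by stacking √µ I under the augmented matrix, so no special solver is needed. A sketch (names illustrative):

```python
import numpy as np
from scipy.optimize import nnls

def nnls_ridge(Z_aug, z_aug, mu_bar=1e-4):
    """Tikhonov-regularized NNLS, cf. (29)-(30): min ||Z w - z||^2 + mu*||w||^2
    with w >= 0, rewritten as a plain NNLS on a row-augmented system."""
    k = Z_aug.shape[1]
    mu = mu_bar * np.trace(Z_aug.T @ Z_aug) / k     # scaling as in (30)
    A = np.vstack([Z_aug, np.sqrt(mu) * np.eye(k)])
    b = np.concatenate([z_aug, np.zeros(k)])
    w, _ = nnls(A, b)
    return w

# rank-deficient case the regularization is meant for: k = 4 neighbors
# spanning only a rank-1 subspace of the 2-D phase space
Z = np.array([[1.0, 1.0, 2.0, 2.0],
              [1.0, 1.0, 2.0, 2.0]])
w = nnls_ridge(Z, np.array([1.5, 1.5]))
```

Even though the plain normal equations are singular here, the ridge-augmented system has full column rank and returns a unique, non-negative weight vector with a small data misfit.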
Remark 3.2. As discussed in [64], the proper size range of the k-NN neighborhood depends on various features
of the data, such as the manifold geometry and the sampling density. In principle, k should be greater than the
dimensionality of the underlying manifold of the material data set E in order to explore the data structure and
prevent overwhelming influence from outliers/noise. Meanwhile, the resultant neighborhoods should be localized
enough to ensure the validity of the locally linear approximation.
Fig. 4. One-bar truss structure with the cross-section area A = 200 cm² subjected to a uniaxial load F = 10 kN.
A simple algorithm for the proposed LCDD solver is shown as follows: Given a convergence tolerance TOL and
the material database E, then
1. Initialize ŝ∗(0)α = [ε̂∗(0)Tα , σ̂∗(0)Tα ]T , α = 1, . . . , m, randomly, and set ν = 0.
2. While maxα=1,...,m ∥ŝ∗(ν)α − ŝ∗(ν−1)α ∥M > TOL:
(a) Solve Eqs. (16), and output {s(ν)α }mα=1 .
(b) Construct the k-NN neighborhood Nk (s(ν)α ) and Ŝα for each local state s(ν)α .
(c) Solve the NNLS problem (27) (or (29)) by Algorithm 1, and use w∗α to obtain ŝ∗(ν+1)α via (24b).
(d) Update: ν ← ν + 1.
3. Solution: sα = [εTα , σTα ]T ← s(ν)α = [ε(ν)Tα , σ(ν)Tα ]T , α = 1, . . . , m.
It has been shown that the Lawson–Hanson method [73,74] (Algorithm 1) used for solving NNLS converges in
a finite number of iterations bounded by the size of the output coefficient vector, which is the number of k-NN
points in LCDD. In addition, considering the small size of the local matrix Ŝα ∈ R2q×k , with k, q ≪ min(N , m),
where N and m are the numbers of discretization nodes and integration points, respectively, the additional
computational cost of solving the NNLS problem in (27) or (29) is negligible compared to that of solving the
linear system (16).
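The overall fixed-point structure of the algorithm above can be sketched as follows. The global solve of Eq. (16) and the local k-NN/NNLS projection are problem-dependent, so they are passed in as callables; the closures in the demo are analytic stand-ins for a one-bar problem with equilibrium stress σ = 100 and a material line σ = 100ε, an assumption made purely for illustration.

```python
import numpy as np

def lcdd_iterate(physical_step, local_step, s_hat0, tol=1e-8, max_iter=200):
    """Schematic LCDD fixed-point loop: alternate the global physical solve
    (step 2a) with the local convex projection (steps 2b-2c) until the
    projected data states stop moving (the stopping criterion of step 2)."""
    s_hat = np.asarray(s_hat0, dtype=float)
    for it in range(max_iter):
        s = physical_step(s_hat)            # physics-admissible state, Eq. (16)
        s_hat_new = local_step(s)           # k-NN + NNLS projection, (24)-(25)
        if np.max(np.abs(s_hat_new - s_hat)) < tol:
            return s, s_hat_new, it + 1
        s_hat = s_hat_new                   # step 2d: next iteration
    return s, s_hat, max_iter

# one-bar demo: equilibrium fixes sigma = 100; the material data lie on
# sigma = 100 * eps; with the metric M = diag(100, 1/100), the projection
# onto the material line has the closed form used below
phys = lambda s_hat: np.array([s_hat[0], 100.0])
loc = lambda s: np.array([(s[0] + s[1] / 100.0) / 2.0,
                          100.0 * (s[0] + s[1] / 100.0) / 2.0])
s, s_hat, iters = lcdd_iterate(phys, loc, np.array([0.0, 0.0]))
```

In this toy setting the iterates converge geometrically to the intersection (ε, σ) = (1, 100) of the physical and material manifolds.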
This example examines data-driven computing in a single truss member (m = 1) when dealing with irregular
material data that exhibits noise and outliers. A truss member with the cross-section area A = 200 cm2 is subjected
to an axial load of F = 10 kN as shown in Fig. 4.
Fig. 5. Comparison of the (a) DMDD and (b) LCDD solvers for the one-bar truss structure using a material database with mild Gaussian
random noise χ = 0.05. The database contains p = 100 stress–strain data points. The number of neighbor points used in LCDD is k = 6.
Fig. 6. Comparison of the (a) DMDD and (b) LCDD solvers for the one-bar truss structure using a material database with strong Gaussian
random noise χ = 0.15. The database contains p = 100 stress–strain data points. The number of neighbor points used in LCDD is k = 6.
even though no experimental data in E lie exactly at those locations. This study indicates the advantage of LCDD in
forming an implicit local material graph (via the convex hull) for searching the optimal data points. This unique
feature allows LCDD to capture the local data structure, providing not only robustness against noise due to the
clustering analysis, but also reproducibility of a locally linear manifold if the data are well sampled, which will be
further discussed in the following examples.
Fig. 7. Comparison of the data-driven solvers, DMDD and LCDD, for the one-bar truss structure using a material database with an outlier.
The database contains p = 100 stress–strain data points. Different numbers of neighbor points k and regularization coefficient values µ are
used in LCDD.
iterations is nearly normal to the underlying material graph, the resulting displacement increment driven by the
force residual (see (17a)) is too small to move the computational stresses and strains toward other data points that
are closer to the physical manifold. As a result, the data-driven scheme converges to an undesirable solution. This
issue is attributed to the non-continuous nature of discrete data, which makes DMDD susceptible to the selection
of the measure coefficient M (the metric norm used to measure distance in the phase space) and to the density and
underlying structure of the data [55,61].
On the other hand, LCDD converges to a better solution (see Fig. 8b), at which the physical and material
manifolds intersect. This is because, when using the LCDD solver, the inherent locally convex approximation
represents a smooth constitutive (material) submanifold (i.e. the convex envelope) associated with the nonlinear
material behavior. Since the locally convex reconstruction resembles the manifold learning technique introduced
in [63] and local regression [75], LCDD is expected to reproduce a locally linear constitutive model corresponding
to the sampled data points.
It should be emphasized that this linear reproducibility is very attractive in dealing with a higher-dimensional
phase space where data are relatively scarce, e.g., the elasticity problems in Section 5. As the locally convex
reconstruction confines the search for the optimum local data to a bounded smooth domain, the proposed
Fig. 8. Comparison of the (a) DMDD and (b) LCDD solvers for the one-bar truss structure using a sigmoid material database. The database
contains p = 100 stress–strain data points. The number of neighbor points used in LCDD is k = 6.
Fig. 9. A 15-bar truss structure with prescribed displacements and applied loads: a = 4 m, h = 2 m, u x = 0.01 m, and F = 100 kN.
LCDD approach also avoids the non-convergence issue during the data-driven iterations that usually appears in
regression-based data-driven methods [47,56].
To examine the convergence behavior with respect to the material data set size, consider a 15-bar truss structure
(i.e., m = 15 for the local state vectors {sα = [εα , σα ]T }mα=1 ) with unit cross-sectional area, as illustrated in Fig. 9.
The solutions obtained from different data-driven solvers are compared against the reference solution using the
following normalized root-mean-square (%RMS) state errors
ε(%RMS) = (1/ε^ref_max) ( (1/m) Σ^m_{α=1} lα (εα − ε^ref_α )² )^{1/2} , (33a)
σ(%RMS) = (1/σ^ref_max) ( (1/m) Σ^m_{α=1} lα (σα − σ^ref_α )² )^{1/2} , (33b)
where {lα }mα=1 are the lengths of the bars, {(εα , σα )}mα=1 are the data-driven solutions for all bar members,
{(ε^ref_α , σ^ref_α )}mα=1 are the strain and stress reference solutions corresponding to the synthetic material model,
and (ε^ref_max , σ^ref_max ) are the largest absolute values of strain and stress among all bar members.
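The error measures (33) translate directly into code; a small helper, with illustrative names, could be:

```python
import numpy as np

def rms_percent(vals, refs, lengths):
    """Normalized %RMS state error of Eq. (33): bar-length-weighted RMS of
    the pointwise error, scaled by the largest absolute reference value."""
    vals, refs, lengths = map(np.asarray, (vals, refs, lengths))
    rms = np.sqrt(np.sum(lengths * (vals - refs) ** 2) / len(vals))
    return rms / np.max(np.abs(refs))

# two unit-length bars: 10% strain error on the first, exact on the second
strain_err = rms_percent([1.1e-3, 2.0e-3], [1.0e-3, 2.0e-3], [1.0, 1.0])
```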
In this numerical study, we consider three material data sets (see Fig. 10) with different sizes (i.e. p =
10², 10³, 10⁴) for the data-driven simulations, where a database with more data points is said to have a higher
density or larger data size. These noisy data sets are superimposed with strain and stress perturbations given by
random Gaussian noise with χ = 2p⁻¹ (refer to (32)). In this case, the underlying structure of the data set
uniformly converges to a linear curve with a slope of M = 100 MPa as the number of data points increases.
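For reference, such a database can be generated along the following lines. Eq. (32) is not reproduced in this excerpt, so the exact noise model here (zero-mean Gaussian perturbations with standard deviation χ times each component's scale) is an assumption for illustration, as are the function names:

```python
import numpy as np

def make_noisy_linear_db(p, modulus=100.0, eps_max=5e-3, seed=0):
    """Synthetic noisy database in the spirit of this example: p points on
    the line sigma = modulus * eps, perturbed by Gaussian noise of level
    chi = 2/p (noise-scaling convention assumed, not taken from Eq. (32))."""
    rng = np.random.default_rng(seed)
    chi = 2.0 / p
    eps = np.linspace(-eps_max, eps_max, p)
    sig = modulus * eps
    eps_noisy = eps + chi * eps_max * rng.standard_normal(p)
    sig_noisy = sig + chi * modulus * eps_max * rng.standard_normal(p)
    return np.column_stack([eps_noisy, sig_noisy])

db = make_noisy_linear_db(1000)
```

Because χ shrinks as 1/p, a least-squares fit through a larger database recovers the underlying slope M increasingly well, which is the uniform-convergence behavior described above.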
Fig. 10. Three noisy databases with different sizes, p = 10², 10³, 10⁴, where the noise level χ of each database
is inversely proportional to the data size, defined as χ = 2p⁻¹.
Fig. 11. Truss structure case. The convergence plot of the normalized RMS errors, ε(%RMS) and σ(%RMS) , against increasing size of the
database. The DMDD and LCDD solvers are compared using the three noisy data sets with different sizes, p = 10², 10³, 10⁴. The number
of neighbor points used in LCDD is k = 12.
The convergence results of the data-driven solvers (DMDD and LCDD), measured by the normalized RMS
state errors in (33), are shown in Figs. 11 and 12. As suggested by the estimate given in [41,61], the data-driven
solutions obtained by both methods converge toward the classical model-based solution with a rate close to 1 as
the number of data points increases. However, DMDD yields less satisfactory results than LCDD: in Fig. 11,
LCDD achieves nearly one order of magnitude higher accuracy than DMDD, owing to the locally convex
reconstruction that recovers the locally linear manifold. In LCDD, the inherent manifold learning ability contributes
to the improved accuracy in addition to the enhanced robustness against noisy data.
It is also observed that the performance of the proposed LCDD solver appears to be insensitive to the number
of convex hull neighbors (from k = 6 to k = 18), as demonstrated by its solution errors (Fig. 12a) and the number
of iterations needed to attain convergence (Fig. 12b). Surprisingly, the results in Fig. 12b suggest that the LCDD
solution converges faster as the data set size increases. This phenomenon is distinctly different from other data-driven
Fig. 12. Truss structure case. (a) The convergence plot of the normalized RMS strain error ε(%RMS) and (b) the number of iterations for
convergence against increasing size of the database. The DMDD and LCDD solvers are compared using the noisy data sets with three
different sizes, p = 10², 10³, 10⁴. Different numbers of neighbor points k are used in LCDD.
solvers, such as DMDD [41] and the max-ent data-driven solver [42], which require more iterations to achieve
convergence when using larger data sets. We believe this is because the local manifold learning of LCDD better
represents the underlying manifold. It is worth noting that when using k = 2, LCDD appears to lose accuracy and
yields solutions approaching DMDD (shown in Fig. 12a), implying that LCDD would recover DMDD in the limit
of using one neighbor.
A close comparison between the data-driven solutions of DMDD and LCDD using the data set with 100 material
points (see Fig. 10) is given in Fig. 13. The reference solution (denoted by the diamond points) is obtained by
utilizing the synthetic linear model (i.e. σ = Mε). As can be seen from Fig. 13a, the variations of the noisy
data set substantially influence the DMDD performance, such that several data-driven solutions (the asterisk points
in the dashed box) converge poorly to local minima that deviate from the linear graph, resulting in an overall
unsatisfactory performance of DMDD. In contrast, the LCDD solver overcomes these issues with noisy data, as
shown in Fig. 13b.
To evaluate the performance of the data-driven solvers, the following normalized root-mean-square (%RMS) state
error is defined for the high-dimensional state:
ω(%RMS) = ( Σ^N_{α=1} Vα ∥sα − s^ref_α ∥²M / Σ^N_{α=1} Vα ∥s^ref_α ∥²M )^{1/2} , (35)
Fig. 13. Comparison of the data-driven solutions of (a) DMDD and (b) LCDD for the truss structure using the noisy material data set of
p = 100 stress–strain data points. The number of neighbor points used in LCDD is k = 12.
where s^ref_α = [ε^refT_α , σ^refT_α ]T denotes the nodal strain and stress reference solutions solved by using the
synthetic material model, while sα = [εTα , σTα ]T denotes the solutions obtained by the data-driven solvers using a
given material data set.
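A direct implementation of the error measure (35), with illustrative names:

```python
import numpy as np

def state_error_rms(states, refs, volumes, M):
    """Normalized %RMS state error of Eq. (35): nodal-volume-weighted
    M-norm misfit, normalized by the reference states."""
    num = sum(V * (s - r) @ M @ (s - r) for s, r, V in zip(states, refs, volumes))
    den = sum(V * (r @ M @ r) for r, V in zip(refs, volumes))
    return np.sqrt(num / den)

# single node, Euclidean metric, 10% error on the first state component
err = state_error_rms([np.array([1.1, 2.0])], [np.array([1.0, 2.0])],
                      [1.0], np.eye(2))
```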
Following the data generation procedure in (32), the noiseless stress–strain data points are first generated using
the synthetic elastic material model in (34), where the strains εx , εy and γxy are defined within the range
[−5 × 10⁻⁴ , 5 × 10⁻⁴ ]. The noisy data set is generated by superimposing the three components of the noiseless strain
Fig. 15. Schematics of (a) Voronoi diagram and (b) the discretization of the beam model. The RK nodes are also the integration points
under the SCNI framework.
and stress data, respectively, with the associated Gaussian noise term defined in (32), where χ = 0.4/(³√p l^(j) )
and l^(j) is the maximum value associated with the jth component of the noiseless data. Four material data sets of
various sizes (i.e., p = 10³, 20³, 40³, 80³) are considered for the beam model.
The performance of the data-driven solvers using the noiseless data sets and the noisy data sets is given in
Figs. 16 and 17, respectively. Consistent with the convergence estimate in [41], the DMDD solutions converge
linearly to the reference solution against the cubic root of the number of data points, regardless of whether noiseless
or noisy databases are used. LCDD using the noiseless data sets (Fig. 16a) generates data-driven solutions with
errors as small as the convergence tolerance of the iterative data-driven computing (see Section 3.3). This implies
that LCDD perfectly captures the underlying linear material graph even in such a high-dimensional phase space.
The convergence study with noisy data sets (Fig. 17a) shows that the LCDD solution using a sparse data set
(p = 10³) achieves higher accuracy than the DMDD solution obtained using a very dense data set (p = 80³),
suggesting the superiority of LCDD over DMDD. Considering that it is difficult, in practice, to obtain a database
with sufficiently dense data for high-dimensional spaces, the proposed LCDD approach is attractive.
As the intrinsic dimensionality of the employed linear elastic database is d = 2, it is interesting to observe from
Figs. 16a and 17a that the LCDD solutions obtained using k = 3 (d < k < 2q, where 2q is the dimension of the
material data set E) present an intermediate solution between the DMDD solution (i.e. k = 1) and the other LCDD
solutions using more neighbor points, k ≥ 2q = 6. The results indicate the importance of including enough
neighbors in the convex hull to fully preserve the manifold learning capacity of LCDD. In the case of noiseless
data, we observe that LCDD with k = 6 is sufficient, as its results are almost identical to those with k = 9 (see
Fig. 16).
The associated numbers of convergence steps for the data-driven solvers are also presented in Figs. 16b and 17b.
In contrast to the DMDD solver, where the number of iterations increases with a larger database, there is no
evident increase for the LCDD solver. Moreover, the comparison of Figs. 16b and 17b shows that LCDD does not
require more iterations between the local and global steps to converge when dealing with the noisy database than
with the noiseless database. This suggests that the convergence of LCDD is sensitive neither to the size of the
database nor to the data sampling quality.
The beam deformations simulated by the data-driven solvers are also compared in Figs. 18 and 19. It is observed
that DMDD performs poorly (Figs. 18a and 19a) due to its susceptibility to noisy data and local minimum issues,
which are more pronounced in elasticity problems with a high-dimensional phase space. On the other hand, LCDD
exactly reproduces the reference solutions when using the noiseless database (Fig. 18b) and shows only marginal
deviations from the reference when using the noisy database (Fig. 19b). Moreover, Fig. 20 shows the stress solutions,
σxx and σxy , obtained by LCDD compared to the constitutive model-based reference solutions. It shows that LCDD
yields accurate stress solutions across the problem domain when using a noisy database, demonstrating that the
LCDD approach remains robust against noisy data in solving elasticity problems.
Fig. 16. Shear beam model with noiseless data sets. (a) The convergence plot of the normalized RMS state error ω(%RMS) and (b) the number
of iterations for convergence against increasing size of the database. The DMDD and LCDD solvers are compared using the noiseless data
sets with four different sizes, p = 10³, 20³, 40³, 80³. Different numbers of neighbor points k are used in LCDD.
Fig. 17. Shear beam model with noisy data sets. (a) The convergence plot of the normalized RMS state error ω(%RMS) and (b) the number
of iterations for convergence against increasing size of the database. The DMDD and LCDD solvers are compared using the noisy data sets
with four different sizes, p = 10³, 20³, 40³, 80³. Different numbers of neighbor points k are used in LCDD.
Fig. 18. Comparison of the data-driven displacement solutions for the shear beam model by using (a) DMDD with a noiseless data set of
p = 80³ stress–strain data points and (b) LCDD with a noiseless data set of p = 10³ stress–strain data points. The number of neighbor
points used in LCDD is k = 6.
Fig. 19. Comparison of the data-driven displacement solutions for the shear beam model by using (a) DMDD with a noisy data set of
p = 80³ stress–strain data points and (b) LCDD with a noisy data set of p = 10³ stress–strain data points. The number of neighbor points
used in LCDD is k = 6.
prevent undesirable local minima. LCDD reduces to the standard DMDD approach when only one neighbor data
point is used, i.e. k = 1. Thus, LCDD retains simplicity and is computationally efficient compared to other
robustness-enhanced data-driven methods that introduce statistical information [42,55]. From the fitted data-driven
(or linearization) point of view, on the other hand, LCDD relies on the approximation of the locally linear material
graph by manifold learning methodologies [21,64] to capture the global structure via local data information.
However, the proposed LCDD scheme distinguishes itself from other manifold learning based data-driven
approaches [47,56] in two aspects: first, the iteration process of LCDD does not need to explicitly construct the
constitutive manifold or use tangent information; second, LCDD introduces the convexity condition on the
reconstructed material graph, thus avoiding the convergence issues that occur with standard regression approaches.
In addition, we believe preserving convexity is of physical importance. For example, the positivity of the strain
energy is expected to be better preserved by LCDD than by other manifold learning techniques, because the
proposed approach learns the underlying constitutive manifold based on a convex combination of a cluster of
neighboring data points, although a rigorous analysis of this question is beyond the scope of this paper.
Fig. 20. Comparison of the stress solutions, σxx and σxy , of the reference constitutive model-based computing (upper) and the proposed
LCDD computing (bottom) for the shear beam model. A noisy data set of p = 10³ stress–strain data points and k = 6 neighbor points are
used in LCDD.
It has also been shown that the embedded NNLS solver seeks the projection of a given computational state
onto a nearby material graph implicitly constructed from the k-NN points, which ensures local linear reproducibility
of the approximation. Hence, in addition to the improved robustness and accuracy in dealing with a noisy database,
LCDD exactly represents an underlying linear stress–strain relationship. In the proposed global–local data-driven
algorithm, smooth solution spaces are employed for the global physical solution and the local projection onto the
convex set. This is achieved by introducing RK shape functions in the approximation of the global physics laws
and the regularized LCDD learning solver in the optimal data search. With the SCNI domain integration employed
in the global Galerkin equations, the required stress–strain data search in the local LCDD learning solver is
significantly reduced, leading to effective data-driven computing. The proposed LCDD method has been applied to
truss problems with linear and nonlinear stress–strain relationships, as well as continuum elasticity problems, and
has demonstrated its robustness, convergence, and accuracy in high-dimensional phase spaces.
This paper is intended to introduce manifold learning techniques, or dimensionality reduction [21], to data-
driven computing. Our numerical studies show that applying manifold learning is effective for problems with
high-dimensional data, because in high-dimensional spaces the data can be extremely sparse and the acquisition of
sufficient data is impractical. This demands effective dimensionality reduction to identify and extract the essential
information from the database, and the elasticity example in Section 5 demonstrates the suitability of LCDD, with
its inherent manifold learning, for such problems.
Although manifold learning has been shown to enhance the convergence performance of the data-driven
iterations, the computational cost of each local step still scales linearly with the size of the material database.
Thus, when the data sets become large with high-dimensional information, such as time-dependent states [43],
inelastic quantities [35,50–52], and so on, more advanced machine learning models for manifold learning, such as
autoencoders, can be employed. This is a direction of our future research.
Acknowledgment
The support of this work by the National Science Foundation under Award Number CCF-1564302 to University
of California, San Diego, is greatly appreciated.
The reproducing kernel (RK) shape functions {ΨI }^N_{I=1} used for approximating the displacement and Lagrange
multipliers in (14) are expressed as
ΨI (x) = C(x; x − xI )Φa (x − xI ). (B.1)
The kernel function Φa defines the local support of the shape functions through a support size "a", as well as the
smoothness of the approximation. A widely used kernel function is the cubic B-spline, which provides C² continuity,
expressed as
               ⎧ 2/3 − 4z² + 4z³              for 0 ≤ z < 1/2
Φa (x − xI ) = Φa (z) = ⎨ 4/3 − 4z + 4z² − (4/3)z³     for 1/2 ≤ z < 1      (B.2)
               ⎩ 0                            for z ≥ 1
where z = ∥x − xI ∥/a. The term C(x; x − xI ) is a correction function constructed using a set of basis functions,
C(x; x − xI ) = Σ^n_{i+j+k=0} (x1 − x1I )^i (x2 − x2I )^j (x3 − x3I )^k bijk (x) = H^T (x − xI )b(x), (B.3)
in which H(x − x I ) is a vector consisting of all the monomial basis functions up to nth order, and b is an unknown
parameter vector determined by enforcing the nth order reproducing conditions as follows,
Σ^N_{I=1} ΨI (x) x^i_{1I} x^j_{2I} x^k_{3I} = x^i_1 x^j_2 x^k_3 , i + j + k = 0, 1, . . . , n. (B.4)
Introducing Eqs. (B.1) and (B.3) into (B.4), the coefficient vector can be obtained by
b(x) = M −1 (x)H(0) (B.5)
where the moment matrix is
M(x) = Σ^N_{I=1} H(x − xI )H^T (x − xI )Φa (x − xI ). (B.6)
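Eqs. (B.1)–(B.6) assemble into a few lines for a one-dimensional discretization. The sketch below (our own helper, using a linear basis n = 1 and the cubic B-spline kernel) builds the RK shape functions at a single evaluation point:

```python
import numpy as np

def rk_shape_1d(x, nodes, a=2.5, order=1):
    """1-D RK shape functions of Eqs. (B.1)-(B.6):
    Psi_I(x) = H(0)^T M(x)^{-1} H(x - x_I) Phi_a(x - x_I)."""
    def phi(r):                              # cubic B-spline kernel, Eq. (B.2)
        z = np.abs(r) / a
        return np.where(z < 0.5, 2/3 - 4*z**2 + 4*z**3,
               np.where(z < 1.0, 4/3 - 4*z + 4*z**2 - (4/3)*z**3, 0.0))
    d = x - nodes                                    # (N,) offsets x - x_I
    H = np.vander(d, order + 1, increasing=True).T   # monomial basis, (n+1, N)
    w = phi(d)                                       # kernel values
    Mmat = (H * w) @ H.T                             # moment matrix, Eq. (B.6)
    b = np.linalg.solve(Mmat, np.eye(order + 1)[:, 0])  # b = M^{-1} H(0), (B.5)
    return (b @ H) * w                               # Psi_I(x), Eq. (B.1)

nodes = np.linspace(0.0, 10.0, 11)
psi = rk_shape_1d(4.3, nodes)
```

The reproducing conditions (B.4) then hold by construction: the shape functions sum to one and reproduce x exactly for the linear basis.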
The SCNI approach is employed for the domain integration of the weak form (13) to achieve computational
efficiency and accuracy when using meshfree shape functions with nodal integration quadrature schemes.
The key idea behind SCNI is to satisfy the linear patch test (and thus ensure linear consistency) by enforcing a
divergence constraint on the test function space and the numerical integration [60], expressed as:
∫̂Ω ∇ΨI dΩ = ∫̂∂Ω ΨI n dΓ , (B.8)
where ‘∧’ over the integral symbol denotes numerical integration. In SCNI, an effective way to achieve Eq. (B.8)
is based on nodal integration with gradients smoothed over conforming representative nodal domains, as shown in
Fig. 21, converted to boundary integration using the divergence theorem
∇̃ΨI (xL ) = (1/VL ) ∫ΩL ∇ΨI dΩ = (1/VL ) ∫∂ΩL ΨI n dΓ , (B.9)
where VL = ∫ΩL dΩ is the volume of the conforming smoothing domain associated with the node xL , and ∇̃ denotes
the smoothed gradient operator. In this method, smoothed gradients are employed for both test and trial functions,
as the approximation in (B.9) enjoys first-order completeness and leads to a quadratic rate of convergence for
solving linear solid problems by meshfree Galerkin methods. As shown in Fig. 21, the continuum domain Ω is
partitioned into N conforming cells by the Voronoi diagram, and both the nodal displacement vectors and the state
variables (e.g., stress, strain) are defined at the set of nodes {xL }^N_{L=1} .
Therefore, for a two-dimensional elasticity problem under the SCNI framework, the smoothed
strain–displacement matrix B̃I (xL ) used in (16) is expressed as:
           ⎡ b̃I1 (xL )      0      ⎤
B̃I (xL ) = ⎢     0      b̃I2 (xL ) ⎥ , (B.10)
           ⎣ b̃I2 (xL )  b̃I1 (xL ) ⎦
where
b̃Ii (xL ) = (1/VL ) ∫∂ΩL ΨI (x)ni (x) dΓ . (B.11)
Since the employment of the smoothed gradient operator in (B.9) and (B.11) satisfies the divergence constraint
regardless of the numerical boundary integration scheme, a two-point trapezoidal rule for each segment of ∂ΩL is
used in this study.
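For a single polygonal smoothing cell, the boundary integration in (B.9)/(B.11) with the two-point trapezoidal rule per edge can be sketched as follows (the shape-function callable and cell geometry are placeholders; the cell vertices are assumed ordered counter-clockwise):

```python
import numpy as np

def smoothed_gradient(psi, cell_vertices):
    """Smoothed gradient of Eq. (B.9) over a 2-D polygonal nodal cell:
    (1/V_L) times the boundary integral of Psi * n, each edge integrated
    with the two-point trapezoidal rule.  psi maps a point to a float."""
    verts = np.asarray(cell_vertices, dtype=float)
    x, y = verts[:, 0], verts[:, 1]
    # cell volume (area) V_L via the shoelace formula
    VL = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    grad = np.zeros(2)
    for i in range(len(verts)):
        p0, p1 = verts[i], verts[(i + 1) % len(verts)]
        edge = p1 - p0
        length = np.linalg.norm(edge)
        normal = np.array([edge[1], -edge[0]]) / length  # outward for CCW cells
        grad += 0.5 * length * (psi(p0) + psi(p1)) * normal
    return grad / VL

# an affine field Psi(x) = 2x + 3y on a unit square cell
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
g = smoothed_gradient(lambda p: 2.0 * p[0] + 3.0 * p[1], square)
```

For an affine field the trapezoidal rule is exact, so the smoothed gradient recovers the constant gradient (2, 3); this is precisely the linear-consistency property the divergence constraint (B.8) is designed to guarantee.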
References
[1] F. Darema, Dynamic data driven applications systems: A new paradigm for application simulations and measurements, in: Int. Conf.
Comput. Sci, 2004, pp. 662–669.
[2] S. Baker, J. Berger, P. Brady, K. Borne, S. Glotzer, R.H.D. Johnson, A. Karr, D. Keyes, B. Pate, H. Prosper, Data-enabled science in
the mathematical and physical sciences, 2010.
[3] J.N. Kutz, Data-Driven Modeling & Scientific Computation: Methods for Complex Systems & Big Data, Oxford University Press,
2013.
[4] D.T. Larose, Discovering Knowledge in Data: An Introduction to Data Mining, John Wiley & Sons, 2014, http://dx.doi.org/10.1016/j.cll.2007.10.008.
[5] T. Hastie, R. Tibshirani, J. Friedman, The Elements of Statistical Learning, in: Springer Series in Statistics, New York, 2009, http://dx.doi.org/10.1007/b94608.
[6] K.P. Murphy, Machine Learning: A Probabilistic Perspective, MIT Press, Cambridge, 2012, http://dx.doi.org/10.1007/978-3-642-21004-4_10.
[7] C. Angermueller, T. Pärnamaa, L. Parts, O. Stegle, Deep learning for computational biology, Mol. Syst. Biol. (2016) http://dx.doi.org/10.15252/msb.20156651.
[8] I. Kononenko, Machine learning for medical diagnosis: History, state of the art and perspective, Artif. Intell. Med. (2001) http://dx.doi.org/10.1016/S0933-3657(01)00077-X.
[9] K. Rajan, Materials informatics, Mater. Today 8 (2005) 38–45.
[10] R. Ramprasad, R. Batra, G. Pilania, A. Mannodi-Kanakkithodi, C. Kim, Machine learning in materials informatics: Recent applications
and prospects, Npj Comput. Mater. (2017) http://dx.doi.org/10.1038/s41524-017-0056-5.
[11] McKinsey & Company, Big data: The next frontier for innovation, competition, and productivity, McKinsey Glob. Inst. (2011) http://dx.doi.org/10.1080/01443610903114527.
[12] J. Ghaboussi, J.H. Garrett, X. Wu, Knowledge-based modeling of material behavior with neural networks, J. Eng. Mech. 117 (1991)
132–153, http://dx.doi.org/10.1061/(ASCE)0733-9399(1991)117:1(132).
[13] J. Ghaboussi, D.E. Sidarta, New nested adaptive neural networks (NANN) for constitutive modeling, Comput. Geotech. 22 (1998)
29–52, http://dx.doi.org/10.1016/S0266-352X(97)00034-7.
[14] M. Lefik, B.A. Schrefler, Artificial neural network as an incremental non-linear constitutive model for a finite element code, Comput.
Methods Appl. Mech. Engrg. 192 (2003) 3265–3283, http://dx.doi.org/10.1016/S0045-7825(03)00350-5.
[15] M. Milano, P. Koumoutsakos, Neural network modeling for near wall turbulent flow, J. Comput. Phys. 182 (2002) 1–26, http://dx.doi.org/10.1006/jcph.2002.7146.
[16] B.D. Tracey, K. Duraisamy, J.J. Alonso, A machine learning strategy to assist turbulence model development, in: 53rd AIAA Aerosp. Sci. Meet., 2015, http://dx.doi.org/10.2514/6.2015-1287.
[17] Z.J. Zhang, K. Duraisamy, Machine learning methods for data-driven turbulence modeling, in: 22nd AIAA Comput. Fluid Dyn. Conf., 2015, http://dx.doi.org/10.2514/6.2015-2460.
[18] M. Schmidt, H. Lipson, Distilling free-form natural laws from experimental data, Science 324 (5923) (2009) 81–85, http://dx.doi.org/10.1126/science.1165893.
[19] S.L. Brunton, J.L. Proctor, J.N. Kutz, Discovering governing equations from data: Sparse identification of nonlinear dynamical systems, Proc. Natl. Acad. Sci. 113 (2016) 3932–3937, http://dx.doi.org/10.1073/pnas.1517384113.
[20] M. Raissi, G.E. Karniadakis, Hidden physics models: Machine learning of nonlinear partial differential equations, J. Comput. Phys. 357 (2018) 125–141, http://dx.doi.org/10.1016/j.jcp.2017.11.039.
[21] J.A. Lee, M. Verleysen, Nonlinear Dimensionality Reduction, Springer Science & Business Media, 2007, http://dx.doi.org/10.1109/TNN.2008.2005582.
[22] S. Haykin, Neural Networks and Learning Machines, third ed., Pearson Education India, 2010.
[23] B.A. Le, J. Yvonnet, Q.-C. He, Computational homogenization of nonlinear elastic materials using neural networks, Internat. J. Numer. Methods Engrg. 104 (2015) 1061–1084, http://dx.doi.org/10.1002/nme.4953.
[24] S. Bhattacharjee, K. Matouš, A nonlinear manifold-based reduced order model for multiscale analysis of heterogeneous hyperelastic materials, J. Comput. Phys. 313 (2016) 635–653, http://dx.doi.org/10.1016/j.jcp.2016.01.040.
[25] M.A. Bessa, R. Bostanabad, Z. Liu, A. Hu, D.W. Apley, C. Brinson, W. Chen, W.K. Liu, A framework for data-driven analysis of materials under uncertainty: Countering the curse of dimensionality, Comput. Methods Appl. Mech. Engrg. (2017) http://dx.doi.org/10.1016/j.cma.2017.03.037.
[26] G.B. Olson, Designing a new material world, Science 288 (5468) (2000) 993–998, http://dx.doi.org/10.1126/science.288.5468.993.
[27] K. Wang, W.C. Sun, A multiscale multi-permeability poroplasticity model linked by recursive homogenizations and deep learning, Comput. Methods Appl. Mech. Engrg. (2018) http://dx.doi.org/10.1016/j.cma.2018.01.036.
[28] A. Oishi, G. Yagawa, Computational mechanics enhanced by deep learning, Comput. Methods Appl. Mech. Engrg. (2017) http://dx.doi.org/10.1016/j.cma.2017.08.040.
28 Q. He and J.-S. Chen / Computer Methods in Applied Mechanics and Engineering 363 (2020) 112791
[29] D. Stoecklein, K.G. Lore, M. Davies, S. Sarkar, B. Ganapathysubramanian, Deep learning for flow sculpting: Insights into efficient learning using scientific simulation data, Sci. Rep. (2017) http://dx.doi.org/10.1038/srep46368.
[30] J.T. Oden, T. Belytschko, J. Fish, T.J.R. Hughes, C. Johnson, D. Keyes, A. Laub, L. Petzold, D. Srolovitz, S. Yip, Simulation-based engineering science: Revolutionizing engineering science through simulation, Natl. Sci. Found. (2006) 1–88.
[31] R. Ibañez, D. Borzacchiello, J.V. Aguado, E. Abisset-Chavanne, E. Cueto, P. Ladeveze, F. Chinesta, Data-driven non-linear elasticity: constitutive manifold construction and problem discretization, Comput. Mech. (2017) 1–14, http://dx.doi.org/10.1007/s00466-017-1440-1.
[32] J. Ling, R. Jones, J. Templeton, Machine learning strategies for systems with invariance properties, J. Comput. Phys. 318 (2016) 22–35, http://dx.doi.org/10.1016/j.jcp.2016.05.003.
[33] M. Raissi, P. Perdikaris, G.E. Karniadakis, Physics informed deep learning (Part I): Data-driven solutions of nonlinear partial differential equations, 2017, pp. 1–22, http://dx.doi.org/10.1103/PhysRevE.87.030803.
[34] A. Koscianski, S. De Cursi, Physically constrained neural networks and regularization of inverse problems, in: 6th World Congr. Struct. Multidiscip. Optim., 2005.
[35] R. Ibáñez, E. Abisset-Chavanne, D. González, J.L. Duval, E. Cueto, F. Chinesta, Hybrid constitutive modeling: data-driven learning of corrections to plasticity models, Int. J. Mater. Form. (2018) http://dx.doi.org/10.1007/s12289-018-1448-x.
[36] G. Evensen, Data Assimilation: The Ensemble Kalman Filter, Springer Science & Business Media, 2010, http://dx.doi.org/10.1007/978-3-540-38301-7.
[37] Z. Liu, M.A. Bessa, W.K. Liu, Self-consistent clustering analysis: An efficient multi-scale scheme for inelastic heterogeneous materials, Comput. Methods Appl. Mech. Engrg. (2016) http://dx.doi.org/10.1016/j.cma.2016.04.004.
[38] K. Matouš, M.G.D. Geers, V.G. Kouznetsova, A. Gillman, A review of predictive nonlinear theories for multiscale modeling of heterogeneous materials, J. Comput. Phys. 330 (2017) 192–220.
[39] B. Peherstorfer, K. Willcox, Dynamic data-driven reduced-order models, Comput. Methods Appl. Mech. Engrg. 291 (2015) 21–41, http://dx.doi.org/10.1016/j.cma.2015.03.018.
[40] Q. He, J.S. Chen, C. Marodon, A decomposed subspace reduction for fracture mechanics based on the meshfree integrated singular basis function method, Comput. Mech. 63 (2019) 593–614, http://dx.doi.org/10.1007/s00466-018-1611-8.
[41] T. Kirchdoerfer, M. Ortiz, Data-driven computational mechanics, Comput. Methods Appl. Mech. Engrg. 304 (2016) 81–101, http://dx.doi.org/10.1016/j.cma.2016.02.001.
[42] T. Kirchdoerfer, M. Ortiz, Data driven computing with noisy material data sets, Comput. Methods Appl. Mech. Engrg. 326 (2017) 622–641, http://dx.doi.org/10.1016/j.cma.2017.07.039.
[43] T. Kirchdoerfer, M. Ortiz, Data-driven computing in dynamics, Internat. J. Numer. Methods Engrg. 113 (2018) 1697–1710, http://dx.doi.org/10.1002/nme.5716.
[44] M. Bonnet, A. Constantinescu, Inverse problems in elasticity, Inverse Problems 21 (2005) R1–R50, http://dx.doi.org/10.1088/0266-5611/21/2/R01.
[45] S. Avril, M. Bonnet, A.S. Bretelle, M. Grédiac, F. Hild, P. Ienny, F. Latourte, D. Lemosse, S. Pagano, E. Pagnacco, F. Pierron, Overview of identification methods of mechanical parameters based on full-field measurements, Exp. Mech. 48 (2008) 381–402, http://dx.doi.org/10.1007/s11340-008-9148-y.
[46] M. Ben Azzouna, P. Feissel, P. Villon, Robust identification of elastic properties using the modified constitutive relation error, Comput. Methods Appl. Mech. Engrg. 295 (2015) 196–218, http://dx.doi.org/10.1016/j.cma.2015.04.004.
[47] R. Ibañez, E. Abisset-Chavanne, J.V. Aguado, D. Gonzalez, E. Cueto, F. Chinesta, Manifold learning approach to data-driven computational elasticity and inelasticity, Arch. Comput. Methods Eng. (2016) 1–11, http://dx.doi.org/10.1007/s11831-016-9197-9.
[48] P. Ladevèze, The large time increment method for the analysis of structures with non-linear behavior described by internal variables, C. R. Acad. Sci. Sér. II 309 (1989) 1095–1099.
[49] J.C. Simo, N. Tarnow, M. Doblare, Non-linear dynamics of three-dimensional rods: Exact energy and momentum conserving algorithms, Internat. J. Numer. Methods Engrg. (1995) http://dx.doi.org/10.1002/nme.1620380903.
[50] L.T.K. Nguyen, M.-A. Keip, A data-driven approach to nonlinear elasticity, Comput. Struct. 194 (2018) 97–115.
[51] D. González, F. Chinesta, E. Cueto, Thermodynamically consistent data-driven computational mechanics, Contin. Mech. Thermodyn. (2018) 1–15, http://dx.doi.org/10.1007/s00161-018-0677-z.
[52] R. Eggersmann, T. Kirchdoerfer, S. Reese, L. Stainier, M. Ortiz, Model-free data-driven inelasticity, Comput. Methods Appl. Mech. Engrg. 350 (2019) 81–99.
[53] A. Leygue, M. Coret, J. Réthoré, L. Stainier, E. Verron, Data-based derivation of material response, Comput. Methods Appl. Mech. Engrg. 331 (2018) 184–196.
[54] J. Ayensa-Jiménez, M.H. Doweidar, J.A. Sanz-Herrera, M. Doblaré, An unsupervised data completion method for physically-based data-driven models, Comput. Methods Appl. Mech. Engrg. 344 (2019) 120–143.
[55] J. Ayensa-Jiménez, M.H. Doweidar, J.A. Sanz-Herrera, M. Doblaré, A new reliability-based data-driven approach for noisy experimental data with physical constraints, Comput. Methods Appl. Mech. Engrg. 328 (2018) 752–774, http://dx.doi.org/10.1016/j.cma.2017.08.027.
[56] Y. Kanno, Simple heuristic for data-driven computational elasticity with material data involving noise and outliers: a local robust regression approach, Jpn. J. Ind. Appl. Math. (2018) http://dx.doi.org/10.1007/s13160-018-0323-y.
[57] C.L. Lawson, R.J. Hanson, Solving Least Squares Problems, Society for Industrial and Applied Mathematics, 1987, http://dx.doi.org/10.1137/1.9781611971217.
[58] W.K. Liu, S. Jun, Y.F. Zhang, Reproducing kernel particle methods, Int. J. Numer. Methods Fluids 20 (1995) 1081–1106, http://dx.doi.org/10.1002/fld.1650200824.
[59] J.-S. Chen, C. Pan, C.T. Wu, W.K. Liu, Reproducing kernel particle methods for large deformation analysis of non-linear structures, Comput. Methods Appl. Mech. Engrg. 139 (1996) 195–227, http://dx.doi.org/10.1016/S0045-7825(96)01083-3.
[60] J.-S. Chen, C.-T. Wu, S. Yoon, Y. You, A stabilized conforming nodal integration for Galerkin mesh-free methods, Internat. J. Numer. Methods Engrg. 50 (2001) 435–466, http://dx.doi.org/10.1002/1097-0207(20010120)50:2<435::aid-nme32>3.0.co;2-a.
[61] S. Conti, S. Müller, M. Ortiz, Data-driven problems in elasticity, Arch. Ration. Mech. Anal. 229 (2018) 79–123, http://dx.doi.org/10.1007/s00205-017-1214-0.
[62] C.A. Felippa, A survey of parametrized variational principles and applications to computational mechanics, Comput. Methods Appl. Mech. Engrg. 113 (1–2) (1994) 109–139, http://dx.doi.org/10.1016/0045-7825(94)90214-3.
[63] S.T. Roweis, L.K. Saul, Nonlinear dimensionality reduction by locally linear embedding, Science 290 (5500) (2000) 2323–2326, http://dx.doi.org/10.1126/science.290.5500.2323.
[64] L.K. Saul, S.T. Roweis, Think globally, fit locally: unsupervised learning of low dimensional manifolds, J. Mach. Learn. Res. 4 (2003) 119–155, http://dx.doi.org/10.1162/153244304322972667.
[65] J.B. Tenenbaum, V. de Silva, J.C. Langford, A global geometric framework for nonlinear dimensionality reduction, Science 290 (2000) 2319–2323, http://dx.doi.org/10.1126/science.290.5500.2319.
[66] M. Belkin, P. Niyogi, Laplacian eigenmaps and spectral techniques for embedding and clustering, in: Adv. Neural Inf. Process. Syst. 14, 2001, pp. 585–591.
[67] Z. Zhang, H. Zha, Principal manifolds and nonlinear dimensionality reduction via tangent space alignment, SIAM J. Sci. Comput. 26 (2004) 313–338.
[68] Y. Bengio, J.-F. Paiement, P. Vincent, O. Delalleau, N. Le Roux, M. Ouimet, Out-of-sample extensions for LLE, Isomap, MDS, eigenmaps, and spectral clustering, in: Adv. Neural Inf. Process. Syst., 2003, pp. 177–184.
[69] E. Vural, C. Guillemot, Out-of-sample generalizations for supervised manifold learning for classification, IEEE Trans. Image Process. 25 (2016) 1410–1424.
[70] H. Cevikalp, B. Triggs, Face recognition based on image sets, in: IEEE Conf. Comput. Vis. Pattern Recognit., 2010, pp. 2567–2573, http://dx.doi.org/10.1109/CVPR.2010.5539965.
[71] S. Boyd, L. Vandenberghe, Convex Optimization, Cambridge University Press, 2004.
[72] A.E. Hoerl, R.W. Kennard, Ridge regression: Biased estimation for nonorthogonal problems, Technometrics 12 (1970) 55–67, http://dx.doi.org/10.1080/00401706.1970.10488634.
[73] J.A. Tropp, A.C. Gilbert, Signal recovery from random measurements via orthogonal matching pursuit, IEEE Trans. Inf. Theory 53 (2007) 4655–4666, http://dx.doi.org/10.1109/TIT.2007.909108.
[74] M. Yaghoobi, M.E. Davies, Fast non-negative orthogonal least squares, in: 23rd Eur. Signal Process. Conf. (EUSIPCO 2015), 2015, pp. 479–483, http://dx.doi.org/10.1109/EUSIPCO.2015.7362429.
[75] W.S. Cleveland, S.J. Devlin, Locally weighted regression: An approach to regression analysis by local fitting, J. Amer. Statist. Assoc. (1988) http://dx.doi.org/10.1080/01621459.1988.10478639.
[76] D. Chen, R.J. Plemmons, Nonnegativity constraints in numerical analysis, in: The Birth of Numerical Analysis, World Scientific, 2010, pp. 109–139, http://dx.doi.org/10.1142/9789812836267_0008.
[77] P.E. Gill, W. Murray, M.H. Wright, Practical Optimization, Emerald Group Publishing Limited, 1981.
[78] T. Belytschko, Y.Y. Lu, L. Gu, Element-free Galerkin methods, Internat. J. Numer. Methods Engrg. 37 (1994) 229–256.
[79] J. Nitsche, Über ein Variationsprinzip zur Lösung von Dirichlet-Problemen bei Verwendung von Teilräumen, die keinen Randbedingungen unterworfen sind, Abh. Math. Semin. Univ. Hambg. 36 (1971) 9–15, http://dx.doi.org/10.1007/BF02995904.
[80] S. Fernández-Méndez, A. Huerta, Imposing essential boundary conditions in mesh-free methods, Comput. Methods Appl. Mech. Engrg. 193 (2004) 1257–1275, http://dx.doi.org/10.1016/j.cma.2003.12.019.
[81] J.-S. Chen, H.P. Wang, New boundary condition treatments in meshfree computation of contact problems, Comput. Methods Appl. Mech. Engrg. 187 (2000) 441–468.
[82] J.-S. Chen, M. Hillman, S.-W. Chi, Meshfree methods: Progress made after 20 years, J. Eng. Mech. 143 (2017) 04017001, http://dx.doi.org/10.1061/(ASCE)EM.1943-7889.0001176.