A Local Radial Basis Function Method for the Numerical Solution of Partial Differential Equations
1-1-2012
Recommended Citation
Chenoweth, Maggie Elizabeth, "A Local Radial Basis Function Method for the Numerical Solution of Partial Differential Equations" (2012). Theses, Dissertations and Capstones. Paper 243.
This thesis is brought to you for free and open access by Marshall Digital Scholar. It has been accepted for inclusion in Theses, Dissertations and Capstones by an authorized administrator of Marshall Digital Scholar. For more information, please contact [email protected].
A LOCAL RADIAL BASIS FUNCTION METHOD FOR THE
NUMERICAL SOLUTION OF PARTIAL DIFFERENTIAL
EQUATIONS
A thesis submitted to
Marshall University
In partial fulfillment of
by
Approved by
Marshall University
May 2012
Copyright by
2012
ACKNOWLEDGMENTS
I would like to begin by expressing my sincerest appreciation to my thesis advisor, Dr. Scott
Sarra. His knowledge and expertise have guided me during my research endeavors and the
process of writing this thesis. Dr. Sarra has also served as my teaching mentor, and I am
grateful for all of his encouragement and advice. It has been an honor to work with him.
I would also like to thank the other members of my thesis committee, Dr. Anna Mummert
and Dr. Carl Mummert. Their feedback and counsel have been exceedingly beneficial while
finalizing my thesis. The leadership of the chair of the Mathematics Department, Dr. Alfred
Akinsete, and formerly Dr. Ralph Oberste-Vorth, as well as the guidance of the graduate
advisor, Dr. Bonita Lawrence, have been outstanding. I am lucky to have been a teaching
If it had not been for numerous teachers, I would not be where I am today. Specifically, I would like to thank the professors I had the privilege of taking mathematics
courses from while at Marshall: Dr. John Drost, Dr. Judith Silver, Dr. Karen Mitchell, Dr.
Ariyadasa Aluthge, Dr. Evelyn Pupplo-Cody, Dr. Yulia Dementieva, Dr. Scott Sarra, Dr.
Ralph Oberste-Vorth, Dr. Anna Mummert, Dr. Carl Mummert, and Dr. Bonita Lawrence.
You have taught me not only mathematics, but a love for the subject that I hope to instill in others.
Finally, I would like to thank my family and friends for all of their continued love and support.
CONTENTS

ACKNOWLEDGMENTS iii
ABSTRACT viii
1 INTRODUCTION 1
SINGULAR VALUE DECOMPOSITION 21
4.1 APPLICATION 1: TURING PATTERNS 31
5 CONCLUSION 40
A MATLAB CODE 42
REFERENCES 47
CURRICULUM VITAE 49

LIST OF FIGURES

LIST OF TABLES
ABSTRACT
Most traditional numerical methods for approximating the solutions of problems in science, engineering, and mathematics require the data to be arranged in a structured pattern. In many applications, this severe restriction on structure cannot be met, and traditional numerical methods do not apply.
In the 1970s, radial basis function (RBF) methods were developed to overcome the structure requirements of existing numerical methods. RBF methods are applicable with scattered data locations. As a result, the shape of the domain may be determined by the application rather than by the numerical method.
Radial basis function methods can be implemented both globally and locally. Comparisons between these two techniques are made in this work to illustrate how the local method can obtain accuracy very similar to that of the global method while only using a small subset of the available points.
Finally, radial basis function methods are applied to solve systems of nonlinear partial differential equations (PDEs) that model pattern formation in mathematical biology. The local RBF method will be used to evaluate Turing pattern and chemotaxis models that are posed on complexly shaped domains.
Chapter 1
INTRODUCTION
Radial basis function (RBF) methods were first studied by Roland Hardy, an Iowa State
geodesist, in 1968, when he developed one of the first effective methods for the interpolation
of scattered data [8]. Polynomial methods had previously been used, but they do not have
a unisolvency property for two-dimensional and higher dimensional scattered data. After
much investigation, Hardy developed what would later be known as the multiquadric (MQ)
radial basis function [9]. This is only one of many existing RBFs.
Then, in 1979, Richard Franke published a study of all known methods of scattered data
interpolation and concluded that the MQ RBF method was the best method. Because of
Franke's extensive numerical experiments with the MQ, he is often credited with popularizing the method.
The next significant event in RBF history was in 1986 when Charles Micchelli, an
IBM mathematician, developed the theory behind the MQ method. He proved that the
system matrix for the MQ method was invertible, which means that the RBF scattered
data interpolation problem is well-posed [15]. Four years later, physicist Edward Kansa
first used the MQ method to solve partial differential equations [12]. In 1992, results from
Wolodymyr Madych and Stuart Nelson [14] showed the spectral convergence rate of MQ
interpolation. Since Kansa’s discovery, research in RBF methods has rapidly grown, and
RBFs are now considered an effective way to solve partial differential equations [30]. All RBF methods using an infinitely differentiable RBF have been proven to be generalizations of pseudospectral methods.
Over the years, RBF interpolation has been shown to work in many cases where polynomial interpolation has failed [20]. RBF methods overcome the limitation of polynomial methods, which require structured data on rectangular domains. RBF methods are frequently used to represent topographical surfaces
as well as other intricate three-dimensional shapes [23], having been successfully applied in
such diverse areas as climate modeling, facial recognition, topographical map production,
auto and aircraft design, ocean floor mapping, and medical imaging. RBF methods have
been actively developed over the last 40 years, and the RBF research area remains very active.
In this work, comparisons will be made between global and local RBF approximation
methods. It will be shown how the local method can obtain very similar accuracy to that of
the global method while using only a small subset of available points. Hence, less computer
memory is required. This will be illustrated with several numerical examples including
those that involve pattern formation obtained from Turing and chemotaxis models.
Chapter 2
In order to effectively understand RBF methods, several definitions are first required.

Definition 1. Given a set of points x_1, . . . , x_N, the distance matrix r has entries r_ij = ‖x_i − x_j‖₂, the pairwise distances between the points.

Definition 2. A radial basis function is a function Φ(x) = φ(r), where φ is a univariate function defined for r ≥ 0 that has been radialized by composition with the Euclidean norm on R^d, r = ‖x‖₂. RBFs may have a free parameter, the shape parameter, denoted by ε.

Definition 3. The scattered data interpolation problem states that, given data (x_j, f_j), j = 1, . . . , N, find a function s satisfying s(x_j) = f_j for j = 1, . . . , N.
Given a set of N centers x_1^c, . . . , x_N^c in R^d, a radial basis function interpolant is of the form
s(x) = Σ_{j=1}^{N} α_j φ(‖x − x_j^c‖₂). (2.1)
The α_j coefficients in the RBF are determined by enforcing the interpolation condition
s(xi ) = f (xi )
at a set of points that usually coincides with the N centers. Enforcing the interpolation conditions results in the linear system

Bα = f (2.2)

to be solved for the expansion coefficients α. The matrix B, called the interpolation matrix or the system matrix, has entries b_ij = φ(‖x_i^c − x_j^c‖₂). In Section 2.3 it will be shown that the system matrix is always invertible.
For the distance matrix r (as defined in Definition 1) that is used in the calculation of
the system matrix, it may seem as though two loops should be created in Matlab. This can be done as follows:

N = length(xc);
r = zeros(N,N);
for i=1:N
    for j=1:N
        r(i,j) = abs(xc(i) - xc(j));
    end
end
However, when loops are used in Matlab, the program can take a long time to run. The double loop, which in a compiled language would be coded efficiently as a loop, can be replaced by a more efficient dot product with a vector of ones.
In Matlab, the distance matrix is formed as follows, where xc represents the vector of N distinct centers:

o = ones(1,length(xc));
r = abs(xc(:)*o - (xc(:)*o)');
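The same vectorization idea can be checked outside of Matlab. The following NumPy sketch (illustrative, not part of the thesis code) forms the distance matrix with broadcasting in place of the double loop:

```python
import numpy as np

def distance_matrix_1d(xc):
    """Pairwise distance matrix r_ij = |xc_i - xc_j|, formed without loops."""
    xc = np.asarray(xc, dtype=float).reshape(-1, 1)   # column vector of centers
    return np.abs(xc - xc.T)                          # broadcasting replaces the double loop

r = distance_matrix_1d([0.0, 0.6, 1.0])
```

The broadcasting subtraction plays the role of the Matlab outer product with a vector of ones.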
To evaluate the interpolant at M points x_1, . . . , x_M, an M × N evaluation matrix H is formed with entries

h_ij = φ(‖x_i − x_j^c‖₂), i = 1, . . . , M and j = 1, . . . , N.

In Matlab, the evaluation matrix is formed in a manner similar to the system matrix, where xc represents the vector of N distinct centers and x is the vector of the M points at which the interpolant is evaluated:

xc = xc(:);
x = x(:);
r = abs(x*ones(1,length(xc)) - ones(length(x),1)*xc');
H = mqRbf(r,shape);

The interpolant is then evaluated as f_a = Hα.
RBF approximation methods may be either global or local. The global approach uses
information from every center in the domain to approximate a function value or derivative
at a single point. In contrast, the local method only uses a small subset of the available
centers. In this section, the mechanics of the global interpolation method are illustrated in a simple one-dimensional example.
For this example, the MQ RBF, as defined in Table 2.1, is used to interpolate a function. Let f(x) = e^{sin(πx)} be restricted to the interval [0,1]. This function is interpolated using the following three centers that are not evenly spaced: x_1^c = 0, x_2^c = 0.6, and x_3^c = 1. The interpolant is evaluated at the five evenly spaced evaluation points x_1 = 0, x_2 = 0.25, x_3 = 0.5, x_4 = 0.75, and x_5 = 1. For the MQ, a shape parameter is also required. We will let ε = 1.25. A discussion of how to choose the best value of the shape parameter appears later in this chapter.
Let

α = [α₁ α₂ α₃]^T

be the unknown vector of the expansion coefficients, let

f = [f(x_1^c) f(x_2^c) f(x_3^c)]^T = [1 e^{sin(0.6π)} 1]^T,

and let

B = [ φ(‖x_1^c − x_1^c‖₂) φ(‖x_1^c − x_2^c‖₂) φ(‖x_1^c − x_3^c‖₂)
      φ(‖x_2^c − x_1^c‖₂) φ(‖x_2^c − x_2^c‖₂) φ(‖x_2^c − x_3^c‖₂)
      φ(‖x_3^c − x_1^c‖₂) φ(‖x_3^c − x_2^c‖₂) φ(‖x_3^c − x_3^c‖₂) ]

be the 3 × 3 system matrix. The linear system Bα = f can now be solved for the expansion coefficients, and the interpolant is then evaluated at the five evaluation points. The exact solutions are 1, 2.0281, 2.7183, 2.0281, and 1. The point-wise errors are as follows:
0.000000000000000
0.088631801660779
0.070973556874207
0.171280064870869
0.000000000000000.
This is illustrated in Figure 2.1. The errors at the endpoints are zero, as the evaluation
points coincide with centers, and the interpolation conditions dictate that the interpolant
agree with the function values at the centers. This is just a basic example to demonstrate
how MQ RBFs are computed, but in further examples, there are multiple strategies that can be used to improve accuracy.
Figure 2.1: Global RBF interpolation of the function f (x) = esin(⇡x) . Red asterisks denote
centers and open black circles denote evaluation points.
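As a cross-check, this example can be reproduced in a few lines of NumPy (a sketch; the helper name mq is illustrative and is not the thesis's mqRbf):

```python
import numpy as np

def mq(r, eps):
    """Multiquadric RBF: phi(r) = sqrt(1 + (eps*r)^2)."""
    return np.sqrt(1.0 + (eps * r) ** 2)

f = lambda x: np.exp(np.sin(np.pi * x))
xc = np.array([0.0, 0.6, 1.0])                 # centers
x = np.array([0.0, 0.25, 0.5, 0.75, 1.0])      # evaluation points
eps = 1.25                                     # shape parameter

B = mq(np.abs(xc[:, None] - xc[None, :]), eps)  # 3x3 system matrix
alpha = np.linalg.solve(B, f(xc))               # expansion coefficients
H = mq(np.abs(x[:, None] - xc[None, :]), eps)   # 5x3 evaluation matrix
err = np.abs(H @ alpha - f(x))                  # point-wise errors
```

The errors at the endpoints vanish because the evaluation points coincide with centers, while the interior errors are on the order of a few hundredths, consistent with the values listed above.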
Three primary categories of radial basis function methods exist: compactly supported, global with finite smoothness, and global infinitely differentiable. Compactly supported RBFs have algebraic convergence rates.

Definition 4. Algebraic convergence rates occur when the error decays at the rate O(N^{−m}) for some m > 0 as N increases.

Second, we have the globally supported, finitely differentiable radial basis functions, which also have algebraic convergence rates [28]. Finally, there are the globally supported, infinitely differentiable radial basis functions. This category can achieve spectral convergence rates.

Definition 5. Spectral convergence rates occur when the error decays at the rate O(η^N) as N increases, where 0 < η < 1. In particular with RBF methods, N is equal to 1/(εh), where
Figure 2.2: Algebraic convergence (represented by blue asterisks) versus spectral conver-
gence (represented by red dots).
ε is the shape parameter and h is the minimum separation distance between centers. Error decays as both the shape parameter and the minimum separation distance decrease. The two convergence rates are compared in Figure 2.2.
Four common RBFs that are globally supported and infinitely differentiable are contained in Table 2.1. The multiquadric (MQ) is arguably the most popular RBF used in applications and is representative of the class of global infinitely differentiable RBFs.
2.3 OTHER PROPERTIES OF RBFS
Radial basis function methods have numerous properties. Several of those properties will be examined in this section.

Definition 6. A real symmetric N × N matrix A is called positive semi-definite if

Σ_{j=1}^{N} Σ_{k=1}^{N} c_j c_k A_jk ≥ 0 (2.3)

for c = [c₁, ..., c_N]^T ∈ R^N. If the inequality is zero only for c ≡ 0, then A is called positive definite.
A positive definite matrix is non-singular since all of the eigenvalues of a positive definite
matrix are positive. In other words, a well-posed interpolation problem exists if the basis
functions generate a positive definite system matrix. Definition 6 relates positive definite
functions to positive semi-definite matrices. To make sure that the interpolation problem
is well-posed, this definition can be refined to that of a strictly positive definite function.
Definition 7. A function Φ is called positive semi-definite on R^d if

Σ_{j=1}^{N} Σ_{k=1}^{N} c_j c̄_k Φ(x_j − x_k) ≥ 0 (2.4)

for any N pairwise different points x₁, ..., x_N ∈ R^d and c = [c₁, ..., c_N]^T ∈ C^N. The function Φ is called strictly positive definite on R^d if the quadratic form (2.4) is zero only for c ≡ 0.
The inverse quadratic, inverse multiquadric, and Gaussian functions are all strictly
positive definite and thus have a positive definite system matrix which is invertible.
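This invertibility property is easy to test numerically. The following NumPy sketch (illustrative, not thesis code) builds a Gaussian system matrix on distinct centers and confirms that all of its eigenvalues are positive; the center count and shape parameter are assumptions chosen for the demonstration:

```python
import numpy as np

# Gaussian RBF phi(r) = exp(-(eps*r)^2) is strictly positive definite, so its
# system matrix on distinct centers should have only positive eigenvalues.
xc = np.linspace(0.0, 1.0, 8)        # 8 distinct, evenly spaced 1d centers
eps = 8.0                            # illustrative shape parameter
r = np.abs(xc[:, None] - xc[None, :])
B = np.exp(-(eps * r) ** 2)
eigs = np.linalg.eigvalsh(B)         # symmetric matrix -> real eigenvalues
```

Since every eigenvalue is positive, B is nonsingular and the interpolation problem on these centers is well-posed.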
After many experiments, Franke concluded that the system matrix was always uniquely
solvable because the linear system resulting from the scattered data interpolation problem
was nonsingular [6]. However, Franke never provided a proof. In 1986, Micchelli provided
the conditions necessary for the proof of Theorem 1, and he generalized the idea of unique solvability.
In order to understand Theorem 1, the definition of a completely monotone function is necessary.

Definition 8. A function ψ is completely monotone on [0, ∞) if

i. ψ ∈ C[0, ∞),

ii. ψ ∈ C^∞(0, ∞),

iii. (−1)^ℓ ψ^(ℓ)(r) ≥ 0 for r > 0 and ℓ = 0, 1, 2, . . . .

Theorem 1. Let ψ(r) = φ(√r) ∈ C[0, ∞) with ψ(r) > 0 for r > 0, and let ψ′ be completely monotone and nonconstant on (0, ∞). Then for any set of N distinct centers {x_j^c}_{j=1}^N, the N × N matrix B with entries b_jk = φ(‖x_j^c − x_k^c‖₂) is invertible. Such a function φ is said to be conditionally positive definite of order one.
For the MQ,

ψ(r) = φ(√r) = (1 + ε²r)^{1/2} (2.5)

and the derivatives of ψ are

ψ′(r) = ε² / [2(1 + ε²r)^{1/2}]

ψ″(r) = −ε⁴ / [4(1 + ε²r)^{3/2}]

ψ^(3)(r) = 3ε⁶ / [8(1 + ε²r)^{5/2}] (2.6)

ψ^(4)(r) = −15ε⁸ / [16(1 + ε²r)^{7/2}]

⋮

From this pattern, it can be seen that the sign alternates for even and odd derivatives [4]. Hence ψ′ is completely monotone and nonconstant on (0, ∞), and by Theorem 1 the MQ is conditionally positive definite of order one.
The MQ contains a free variable, ε, known as the shape parameter. The shape parameter affects both the conditioning of the system matrix and the accuracy of the RBF method. The conditioning is measured by the condition number

κ(B) = ‖B‖₂ ‖B^{−1}‖₂ = σ_max / σ_min. (2.7)

The condition number can be defined using any matrix norm; in the particular case of the 2-norm, it is the ratio of the maximum to minimum singular values, which are represented by σ. As a rule of thumb, when the condition number is 10^n, one would expect to lose approximately n accurate digits when solving a general linear system Ax = b.
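The relationship between the shape parameter and κ(B) can be demonstrated with a short NumPy sketch (illustrative; the center count and shape parameter values are assumptions):

```python
import numpy as np

xc = np.linspace(0.0, 1.0, 30)
r = np.abs(xc[:, None] - xc[None, :])

def mq_condition(eps):
    """kappa(B) = sigma_max / sigma_min for the MQ system matrix."""
    B = np.sqrt(1.0 + (eps * r) ** 2)
    s = np.linalg.svd(B, compute_uv=False)   # singular values, descending order
    return s[0] / s[-1]

kappa_small = mq_condition(0.5)   # small shape parameter: ill-conditioned
kappa_large = mq_condition(8.0)   # larger shape parameter: better conditioned
```

Running this shows the condition number growing by many orders of magnitude as ε decreases, mirroring the right panel of Figure 2.3.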
Figure 2.3: MQ RBF interpolation of the function f(x) = e^x. Left: Error versus the shape parameter. Right: Condition number versus the shape parameter.
As shown in Figure 2.3, the condition number increases exponentially as the shape parameter decreases. In order for the system matrix to be well-conditioned, the shape parameter must not be too small. However, small shape parameters are required to obtain good accuracy, so accurate RBF approximations come with an ill-conditioned system matrix. Hence, the best accuracy and the best conditioning cannot occur at the same time. This is known as the Uncertainty Principle: the more favorable the value of one quantity, the less favorable the value of the other.
For small ε, the error improves, but the condition number grows. There are several approaches that can be taken to find the best shape parameter. These approaches include using the power function [29], "leave-one-out" cross validation [19], or the Contour-Pade algorithm [5]. RBF methods are most accurate when the system matrix is ill-conditioned. In practice, the shape parameter can be selected so that the resulting system matrix has a condition number κ(B) in the range 10^{13} ≤ κ(B) ≤ 10^{15}. These bounds for the condition number are valid when using a computer that implements double precision floating point arithmetic, but the bounds will be different when another floating point precision is used.
Derivatives of a function are approximated by differentiating the RBF interpolant:

∂/∂x_i f(x) ≈ Σ_{j=1}^{N} α_j ∂/∂x_i φ(‖x − x_j^c‖₂). (2.8)

When evaluated at the N centers {x_j^c}_{j=1}^N, (2.8) becomes

∂/∂x_i f ≈ Hα, (2.9)

where H is the matrix with entries

h_{i,j} = ∂/∂x_i φ(‖x_i^c − x_j^c‖₂), i, j = 1, . . . , N. (2.10)
From Equation (2.2), it is known that α = B^{−1}f. The differentiation matrix can be defined as

D = HB^{−1}. (2.11)

The derivative is then approximated as

∂/∂x_i f ≈ Df. (2.12)
(Note: In Matlab, the inverse of B is not actually formed. Instead, D = H/B.) Using the chain rule to differentiate the radial basis function φ[r(x)], the result is

∂φ/∂x_i = (dφ/dr)(∂r/∂x_i), (2.13)

where

∂r/∂x_i = (x_i − x_i^c)/r, (2.14)

and

∂²φ/∂x_i² = (dφ/dr)(∂²r/∂x_i²) + (d²φ/dr²)(∂r/∂x_i)², (2.15)

with

∂²r/∂x_i² = [1 − (∂r/∂x_i)²] / r. (2.16)

For the MQ,

dφ/dr = ε²r / (1 + ε²r²)^{1/2} (2.17)

and

d²φ/dr² = ε² / (1 + ε²r²)^{3/2}. (2.18)
For this example, f′(x) is approximated for f(x) = e^{sin(πx)} on the interval [0,1] using the three center locations x_1^c = 0, x_2^c = 0.6, and x_3^c = 1.
To begin, let

f = [f(x_1^c) f(x_2^c) f(x_3^c)]^T = [1 e^{sin(0.6π)} 1]^T,

and let

B = [ φ(‖0 − 0‖₂)   φ(‖0 − 0.6‖₂)   φ(‖0 − 1‖₂)
      φ(‖0.6 − 0‖₂) φ(‖0.6 − 0.6‖₂) φ(‖0.6 − 1‖₂)
      φ(‖1 − 0‖₂)   φ(‖1 − 0.6‖₂)   φ(‖1 − 1‖₂) ]

  = [ 1      1.25   1.6008
      1.25   1      1.1180
      1.6008 1.1180 1 ].
In Matlab, the differentiation matrix is formed as D = H/B, and this is calculated to be

D = [ −2.0783  3.1217 −1.1393
      −0.7439 −0.8939  1.6313
       0.5809 −3.4902  2.9723 ].

The derivative approximation is then

f_a′ = Df

     = [ −2.0783  3.1217 −1.1393     [ 1
         −0.7439 −0.8939  1.6313   ·   e^{sin(0.6π)}
          0.5809 −3.4902  2.9723 ]     1 ]

     = [  4.8628
         −1.4265
         −5.4810 ].
The exact derivative of f(x) = e^{sin(πx)} is f′(x) = πe^{sin(πx)} cos(πx). The results for this
example are represented in Figure 2.4. If more centers are added, it is possible to obtain
better accuracy.
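The numbers in this example can be reproduced with the following NumPy sketch (illustrative; it mirrors the Matlab computation D = H/B without forming the inverse explicitly):

```python
import numpy as np

xc = np.array([0.0, 0.6, 1.0])
eps = 1.25
d = xc[:, None] - xc[None, :]              # signed differences x_i - x_j
B = np.sqrt(1.0 + (eps * d) ** 2)          # MQ system matrix
H = eps**2 * d / B                         # entries eps^2 (x_i - x_j) / phi(|x_i - x_j|)
D = np.linalg.solve(B.T, H.T).T            # D = H B^{-1} via a linear solve

fd = D @ np.exp(np.sin(np.pi * xc))        # approximate f'(x) at the centers
```

The computed D and derivative values agree with the matrices shown above to the displayed precision.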
Figure 2.4: The green line represents the derivative of f (x) = esin(⇡x) . The red asterisks are
the exact values at the centers. The black open circles are the approximate values at the
evaluation points.
Chapter 3
Up until this point in this thesis, the main focus has been on the global method for radial
basis functions. In many cases, the local method can be as accurate as the global method.
The main advantage to using the local method is that less computer storage and fewer flops are required. The local interpolant

I_n f(x) = Σ_{k∈I_i} α_k φ(‖x − x_k^c‖₂) (3.1)

is formed at each of the N centers x_1^c, . . . , x_N^c in R^d. In the above equation, I_i is a vector associated with center i that contains the center number and the indices of the n − 1 nearest neighboring centers, α is a vector of expansion coefficients, and φ is an RBF. Each center and its n − 1 neighbors are called a stencil. The α_k coefficients in Equation (3.1) are chosen by enforcing
I_n f(x_k) = f_k (3.2)
with k ∈ I_i on each stencil. This gives N linear systems, each of dimension n × n, of the form

Bα = f. (3.3)
This equation is solved for the expansion coefficients. Just like the global method, the matrix B is called the interpolation matrix or the system matrix, with entries

b_jk = φ(‖x_j^c − x_k^c‖₂), j, k ∈ I_i. (3.4)

Because the system matrix B is always invertible, the expansion coefficients are uniquely determined. A derivative is approximated by applying a linear differential operator L to the local RBF interpolant. Depending on the problem, L may consist of a single derivative or a combination of derivatives. The derivative approximation is then obtained by evaluating the differentiated interpolant at the center where the stencil is based:

Lf(x_i^c) ≈ h · α, (3.5)

where h is the 1 × n vector with entries

h_j = Lφ(‖x − x_j^c‖₂) evaluated at x = x_i^c, (3.6)

and α is the n × 1 vector of RBF expansion coefficients. The equation can be simplified by recognizing that

Lf(x_i^c) ≈ hB^{−1}f(I_i) = (hB^{−1})f(I_i) = w · f(I_i), (3.7)

where w = hB^{−1} (3.8) defines the stencil weights.
The weights are the solution of the linear system
wB = h. (3.9)
Derivatives are approximated by multiplying the weights by the function values at the
centers.
For this example, f′(x) is approximated for f(x) = e^{sin(πx)} on the interval [0,1] using the local RBF method with ε = 0.43. Let x_0, . . . , x_10 be N = 11 evenly spaced points on the interval [0,1]. For convenience, a stencil size of n = 3 will be chosen. In order to approximate the derivative at x_5 = 0.5, the equation f′(x_5) ≈ w_4 f(x_4) + w_5 f(x_5) + w_6 f(x_6) is used. The weights are found by solving the system wB = h with

h = [ Lφ(‖0.5 − 0.4‖₂) Lφ(‖0.5 − 0.5‖₂) Lφ(‖0.5 − 0.6‖₂) ]
  = [ 0.0185 0 −0.0185 ],

that is,

[ w_4 w_5 w_6 ] [ 1      1.0009 1.0037
                  1.0009 1      1.0009
                  1.0037 1.0009 1      ] = [ 0.0185 0 −0.0185 ].
Figure 3.1: The green line represents the derivative of f (x) = esin(⇡x) . The red asterisks are
the exact values at the centers. The black open circles are the approximate values at the
evaluation points using local RBF approximation.
f′(x_5) ≈ w_4 f_4 + w_5 f_5 + w_6 f_6.
The Matlab results for this example when the local RBF method is applied at each data
point are represented in Figure 3.1. The red asterisks are the exact derivatives at each of
the 11 points, and the open circles represent the approximated solutions found using the
local method.
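The stencil computation above can be reproduced as follows (a NumPy sketch; note that by symmetry of the stencil about x₅ = 0.5 the approximation is essentially exact at this point, since the true derivative there is 0):

```python
import numpy as np

eps = 0.43
xs = np.array([0.4, 0.5, 0.6])             # stencil: x4, x5, x6
xe = 0.5                                    # point where f' is approximated

B = np.sqrt(1.0 + (eps * (xs[:, None] - xs[None, :])) ** 2)   # local system matrix
h = eps**2 * (xe - xs) / np.sqrt(1.0 + (eps * (xe - xs)) ** 2)
w = np.linalg.solve(B.T, h)                 # weights: solve w B = h (B is symmetric)

df_approx = w @ np.exp(np.sin(np.pi * xs))  # ~ f'(0.5); the exact value is 0
```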
In order to calculate wB = h, the singular value decomposition (SVD) of the system matrix B is used. The SVD of B is B = UΣV^T. Since B is an N × N matrix, U and V are N × N orthogonal matrices [26]. The matrix Σ is a diagonal N × N matrix having the N singular values of B as its nonzero elements. Because U and V are orthogonal, the inverse of each of these two matrices is the transpose of the matrix. The inverse of the system matrix is B^{−1} = VΣ^{−1}U^T, and by multiplying both sides of Equation (3.9) by B^{−1}, the weights are obtained as w = hVΣ^{−1}U^T.
Condition numbers can also be calculated using the SVD. In the following pseudocode, the shape parameter is adjusted until the condition number is in the desired range:

conditionNumber = 1
while conditionNumber is outside the desired range
    construct B
    [U,S,V] = svd(B)
    conditionNumber = maximum(S)/minimum(S)
    if conditionNumber < minimumConditionNumber
        decrease the shape parameter
    elseif conditionNumber > maximumConditionNumber
        increase the shape parameter
    end
end
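The search can be sketched concretely in NumPy (illustrative; the target threshold, the starting ε, and the reduction factor 0.9 are assumptions, not prescriptions from the thesis):

```python
import numpy as np

xc = np.linspace(0.0, 1.0, 40)
r = np.abs(xc[:, None] - xc[None, :])

eps = 10.0
K_MIN = 1e13                                # lower end of the target range
while True:
    B = np.sqrt(1.0 + (eps * r) ** 2)       # MQ system matrix
    s = np.linalg.svd(B, compute_uv=False)
    kappa = s[0] / s[-1]
    if kappa >= K_MIN:                      # condition number large enough
        break
    eps *= 0.9                              # shrinking eps raises kappa
```

Because κ(B) grows monotonically as ε shrinks, the loop terminates with a condition number at or just above the lower bound.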
3.2 GLOBAL VERSUS LOCAL RBF APPROXIMATION
As previously mentioned, the local RBF method on various stencils can be just as accurate
and efficient as the global RBF method. This is especially true for derivative approximation.
In Section 3.1, a small example was examined that utilized global RBF interpolation. For this section, an example with a larger N that uses the global method to approximate the derivative of a function will be analyzed. It will then be compared to the local derivative approximation. The derivative of

f(x) = e^{x³} − cos(2x) (3.10)

will be approximated on the interval [−1, 1] using both the global and local RBF methods.
Figure 3.2: The green line represents the derivative of f(x) = e^{x³} − cos(2x). The red asterisks are the exact values at the centers. The black open circles are the approximate values at the evaluation points using global RBF approximation.
The exact derivative of Equation (3.10) is

f′(x) = 3e^{x³}x² + 2 sin(2x). (3.11)

In Figure 3.2, the green line represents the derivative f′(x), and the red asterisks are the exact values of the derivative at 100 data points. The black open circles are the approximate solutions at each of the corresponding 100 data points using the global RBF method. The error ranges from 8.7128 × 10^{−11} to 0.0562 at the right endpoint.
A typical result with RBF methods is that the accuracy of the global method can be matched by the local method with a problem dependent n < N. This can be seen in Table 3.1, where the local method was used to approximate the derivative of Equation (3.10). When the stencil size was equal to 7, it was found that the average error resembled that of the global method.
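A minimal NumPy version of the local derivative approximation follows (a sketch with illustrative choices N = 100, n = 7, and ε = 5, which need not match the thesis's exact settings):

```python
import numpy as np

def mq(r, eps):
    return np.sqrt(1.0 + (eps * r) ** 2)

def mq_dx(d, eps):
    return eps**2 * d / np.sqrt(1.0 + (eps * d) ** 2)

N, n, eps = 100, 7, 5.0
x = np.linspace(-1.0, 1.0, N)
fx = np.exp(x**3) - np.cos(2.0 * x)
exact = 3.0 * np.exp(x**3) * x**2 + 2.0 * np.sin(2.0 * x)

approx = np.empty(N)
for i in range(N):
    idx = np.argsort(np.abs(x - x[i]))[:n]        # stencil: n nearest centers
    xs = x[idx]
    B = mq(np.abs(xs[:, None] - xs[None, :]), eps)
    h = mq_dx(x[i] - xs, eps)
    approx[i] = np.linalg.solve(B.T, h) @ fx[idx]  # w = h B^{-1}, then w . f

max_err = np.max(np.abs(approx - exact))
```

Each center only involves an n × n solve, so the cost and storage grow linearly in N rather than quadratically.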
After the space derivatives of a PDE have been discretized by the RBF method, a system
of ODEs
ut = F (u) (3.12)
remains to be advanced in time. Any numerical ODE method can be used. This approach
is called the method of lines. In all numerical examples, the following fourth-order Runge-Kutta method

k₁ = Δt F(uⁿ, tⁿ)
k₂ = Δt F(uⁿ + k₁/2, tⁿ + Δt/2)
k₃ = Δt F(uⁿ + k₂/2, tⁿ + Δt/2)
k₄ = Δt F(uⁿ + k₃, tⁿ + Δt)
uⁿ⁺¹ = uⁿ + (1/6)(k₁ + 2k₂ + 2k₃ + k₄)
has been used [1]. The eigenvalues (scaled by Δt) of the discretized, linearized operator must lie within the stability region of the ODE method. A Runge-Kutta stability region, along with the scaled eigenvalues of a 1d advection-diffusion operator, is shown in Figure 3.3.
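The Runge-Kutta scheme above can be written as a reusable step function; the sketch below (illustrative, pure Python) checks its accuracy on the test problem u′ = −u, u(0) = 1, whose exact solution is e^{−t}:

```python
import math

def rk4_step(F, u, t, dt):
    """One step of the classical fourth-order Runge-Kutta method for u' = F(u, t)."""
    k1 = dt * F(u, t)
    k2 = dt * F(u + 0.5 * k1, t + 0.5 * dt)
    k3 = dt * F(u + 0.5 * k2, t + 0.5 * dt)
    k4 = dt * F(u + k3, t + dt)
    return u + (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0

# advance u' = -u from t = 0 to t = 1 with 100 steps of size 0.01
u, t, dt = 1.0, 0.0, 0.01
for _ in range(100):
    u = rk4_step(lambda v, s: -v, u, t, dt)
    t += dt
err = abs(u - math.exp(-1.0))
```

The error is far below the time-step size, reflecting the method's fourth-order accuracy.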
As a rule of thumb, the RBF method is stable if the scaled eigenvalues of the linearized spatial operator lie within the stability region [25]. As can be seen in Figure 3.3, several eigenvalues are located outside of the Runge-Kutta stability region when ε = 1.5. Hence, instability occurs for this one-dimensional advection-diffusion equation when the shape parameter equals 1.5. When the shape parameter was changed to ε = 6, all of the eigenvalues were contained in the region, and the computation was stable.
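Whether a scaled eigenvalue z = λΔt lies inside the stability region can be checked with the RK4 stability polynomial R(z); the scheme is stable where |R(z)| ≤ 1. A small sketch (illustrative):

```python
def rk4_stability(z):
    """Stability function of classical RK4: R(z) = 1 + z + z^2/2 + z^3/6 + z^4/24.
    A scaled eigenvalue z = lambda * dt is inside the stability region when |R(z)| <= 1."""
    return 1.0 + z + z**2 / 2.0 + z**3 / 6.0 + z**4 / 24.0

inside = abs(rk4_stability(-2.7 + 0.0j))    # on the negative real axis: inside
outside = abs(rk4_stability(-3.0 + 0.0j))   # beyond roughly -2.79: outside
```

Evaluating |R| over a grid of complex z and contouring at 1 reproduces the stability region boundary plotted in Figure 3.3.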
Consider the one-dimensional advection-diffusion equation

u_t + u_x − νu_xx = 0, (3.14)

which has the exact solution

u(x, t) = (1/2) [ erfc((x − t)/(2√(νt))) + exp(x/ν) erfc((x + t)/(2√(νt))) ]. (3.15)
Initial and boundary conditions are prescribed according to the exact solution. Let N = 51 evenly spaced centers be used, let ε = 6 be the shape parameter, and let ν = 0.002. The fourth-order Runge-Kutta method is used with a time-step of 0.005 to advance the problem until the final time 0.5. At t = 0.5, the maximum point-wise error was 4.72 × 10^{−4}. These results are shown in Figure 3.4.
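The exact solution (3.15) can be verified independently by checking that it satisfies the PDE. The sketch below (illustrative, pure Python) evaluates a centered finite-difference residual of u_t + u_x − νu_xx at an interior point:

```python
import math

nu = 0.002

def exact_u(x, t):
    """Exact solution (3.15) of u_t + u_x - nu*u_xx = 0."""
    s = 2.0 * math.sqrt(nu * t)
    return 0.5 * (math.erfc((x - t) / s)
                  + math.exp(x / nu) * math.erfc((x + t) / s))

# centered finite-difference residual of the PDE at an interior point
x0, t0, d = 0.4, 0.5, 1e-3
ut = (exact_u(x0, t0 + d) - exact_u(x0, t0 - d)) / (2.0 * d)
ux = (exact_u(x0 + d, t0) - exact_u(x0 - d, t0)) / (2.0 * d)
uxx = (exact_u(x0 + d, t0) - 2.0 * exact_u(x0, t0) + exact_u(x0 - d, t0)) / d**2
residual = abs(ut + ux - nu * uxx)
```

The residual is small (limited only by the finite-difference truncation error), confirming that (3.15) solves (3.14).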
Figure 3.3: Runge-Kutta stability region with the eigenvalues from Equation (3.14). Left: Instability with ε = 1.5 and κ(B) = 1.3482 × 10^{19}. Right: Stability with ε = 6 and κ(B) = 7.5254 × 10^{13}.
Because this is a one-dimensional problem on a small interval, the global method can be used. However, this will not be the case later for two-dimensional problems that may contain many more centers.
Figure 3.4: Left: Exact solution versus MQ solution of Equation 3.14. Right: Point-wise
error.
3.4 2D ADVECTION DIFFUSION REACTION
This next example is an advection-diffusion-reaction equation with an exact solution available. This equation has all the elements of the more complicated chemotaxis and Turing models:

u_t + α(u_x + u_y) = ν(u_xx + u_yy) + γu²(1 − u), (3.16)

where ν is the viscosity coefficient, α is the advection coefficient, and γ is the reaction coefficient. The partial differential equation is linear except for the nonlinear reaction term.
Consider Equation (3.16) with ν = 0.5, α = 1, and γ = 1/ν. The analytical solution is given by

u(x, y, t) = 1 / (1 + e^{a(x+y−bt)+c}), (3.17)

where a = √(γ/(4ν)), b = 2α + √(γν), and c = a(b − 1). Dirichlet boundary conditions are prescribed for t > 0 by using the exact solution. As a domain, take a circle of radius 1.5.
Locating the boundary points on the circle requires some care. Since the computer stores the points as floating point numbers, simply testing whether the distance from the circle's center to a point exactly equals the radius may fail. This problem can be eliminated by accepting any point whose distance from the center lies between the radius minus a factor of 100 × machine epsilon and the radius plus the same factor. Machine epsilon refers to the smallest positive number that, when added to 1, results in a number the computer can distinguish from 1. In Matlab:

boundaryIndex = find((sqrt(x.^2+y.^2)<(R+100*eps)) & ...
                     (sqrt(x.^2+y.^2)>(R-100*eps)));
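The same tolerance test can be sketched in NumPy (illustrative):

```python
import numpy as np

R = 1.5
theta = np.linspace(0.0, 2.0 * np.pi, 40, endpoint=False)
x = R * np.cos(theta)
y = R * np.sin(theta)                      # points generated on the circle

d = np.sqrt(x**2 + y**2)
tol = 100 * np.finfo(float).eps
on_boundary = (d < R + tol) & (d > R - tol)
```

With the tolerance, every generated boundary point is detected even though the computed distances differ from R by a few units in the last place.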
For this example, the global method is implemented on 4000 points in the circle, beginning at time t = 0 and advancing with a time step of 0.0005 until the time equals 5. The condition number was equal to 6.9955 × 10^{14}, and the error was 7.4401 × 10^{−6}. The results
Figure 3.5: Left: Circle with N = 500 centers. Right: Stencil size of n = 21. The center of the stencil is marked with a blue asterisk, and the other points in the stencil are marked with red squares.
are illustrated in Figures 3.6 and 3.7. It took two minutes for the condition number to be computed and seventeen minutes for the entire program to run. When trying to increase the number of centers further, memory became an issue.
On an average desktop computer, the global method cannot be implemented with a large number of centers, such as 15,000, due to memory restraints: the 15,000 × 15,000 dense matrix must be formed and every element stored. However, the local method can be applied to these problems. Using the local method, sparse matrices are stored, and the desired accuracy can still be obtained. This is illustrated in Table 3.2 using Equation (3.16). When the local method is executed with n = 15, it is just as accurate as when n = 300. As the stencil size decreases, not only is less computer memory needed, but the execution time of the program also decreases. An example of a circle with N = 500 centers and a stencil size of n = 21 is illustrated in Figure 3.5. A problem with a large number of centers that would be impossible to run on an average desktop computer using the global method (due to memory constraints) can still be solved with the local method.
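The storage argument is simple arithmetic; with the assumed values N = 15,000 centers and a stencil of n = 15:

```python
# Back-of-the-envelope storage comparison with 8-byte doubles,
# assuming N = 15,000 centers and a local stencil of n = 15.
N, n = 15_000, 15
dense_bytes = N * N * 8           # dense global matrix: every entry stored
sparse_bytes = N * n * 8          # local method: n nonzero weights per center
ratio = dense_bytes // sparse_bytes
```

The dense matrix needs about 1.8 GB while the sparse weight matrix needs about 1.8 MB, a factor of N/n = 1000 less.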
Chapter 4
Many phenomena that occur in nature can be modeled using partial differential equations. More specifically, two biological examples that utilize time-dependent advection-diffusion-reaction equations are Turing patterns and chemotaxis. RBF methods can be implemented efficiently on these types of problems, with large data sets on complexly shaped domains, to approximate the solutions. However, as will be seen, the global RBF method is not practical for problems of this size.
Alan Turing (1912–1954) is perhaps most notably remembered as an early computer science
visionary who conceptualized a universal computing device now known as a Turing machine.
Later in his life, Turing became interested in modeling how the genes of an organism can be expressed in physical traits, such as spots and stripes. Examples of Turing patterns can be seen in Figures 4.1 and 4.2. Based on a mechanism of reacting and diffusing chemicals, this system has two primary parts. First, there is an activator that can generate more of itself. Second, there is an inhibitor. The inhibitor slows down the activator. Patterns are formed when the chemicals are affected by the activator and inhibitor and spread across a given region. In the 1980s, these patterns were able to be produced in laboratory experiments.
Figure 4.1: Left: Many animals, such as the leopard, have prints that resemble Turing
patterns. Right: Galaxies are also believed to exhibit properties of Turing patterns [13].
u_t = D∇²u + f(u, v)

v_t = ∇²v + g(u, v). (4.1)
The evolution of the chemical concentrations is given by u(x, y, t) and v(x, y, t) at spatial position (x, y) and time t. D is a constant diffusion coefficient that is a ratio of the diffusion coefficients of the two chemicals. The functions f and g model the reaction.
They are nonlinear reaction terms of the chemicals. Under certain conditions, various patterns, such as dots or stripes, form. In addition to parameter values, these patterns vary with the domain and the initial conditions. The particular Turing system considered here is

u_t = Dδ∇²u + αu(1 − τ₁v²) + v(1 − τ₂u)

v_t = δ∇²v + βv(1 + (ατ₁/β)uv) + u(γ + τ₂v) (4.2)
on a domain shaped like a butterfly. The boundary of the domain is outlined by a parametric curve.

Figure 4.2: Left: Turing patterns observed on a fish. Right: Computer generated Turing pattern [13].
The parameters in the equations were chosen according to the results found in [18, 3], where spotted patterns were formed with D = 0.516, δ = 0.0045, α = 0.899, τ₁ = 0.02, β = 0.91, τ₂ = 0.2, and γ = −α. Zero Dirichlet boundary conditions are applied to u and v, and u and v are given initial conditions randomly chosen between −0.5 and 0.5. On the butterfly domain, there are 8,125 centers, and the local RBF method is applied with a stencil size of 100. The results demonstrate that spotted patterns can be formed from the Turing system. This is illustrated in Figure 4.3. These findings are in qualitative agreement with those reported in [18, 3].
Figure 4.3: A butterfly shaped domain with the solution to the Turing system from Equation
(4.2) at time t = 120.
Chemotaxis occurs in organisms ranging from the simplest bacteria to the most complex vertebrates. The single-celled prokaryotic bacteria move away from hostile conditions and toward nutrient clusters using chemotaxis. Similarly, the DNA-carrying eukaryotes use chemotaxis for immune responses and wound healing. So what exactly is chemotaxis?
At its most basic level, chemotaxis is cell movement. The prefix "chemo" refers to chemicals, while the suffix "taxis" is Greek for "arrangement" or "ordering." By definition, chemotaxis is the movement of a motile cell or organism, or part of one, in a direction corresponding to a gradient of increasing or decreasing concentration of a particular substance [11].
Budrene and Berg's biological experiments on the patterns that Escherichia coli and
Salmonella typhimurium form provide the basis for our research [2]. Simply stated, when
the bacteria were exposed to a liquid medium, patterns in the bacteria materialized and
rearranged before eventually disappearing. Their experiments showed that the patterns
formed by the bacteria were the result of random migration and chemotaxis.
In order for the patterns to form, the bacteria were exposed to intermediates of the
tricarboxylic acid (TCA) cycle; succinate and fumarate produced the most effective results.
In response to the TCA cycle intermediates, the bacteria released the strong chemoattractant
aspartate. This chemoattractant causes the cells to move in the direction of a particular
concentration gradient. This movement increases cell density, whereas diffusion has the
opposite effect. The competition between these two forces is the primary reason behind the
formation of the patterns. Hence, when studying chemotaxis, we must focus on the cells, the
stimulus, and the chemoattractant. In Budrene and Berg's experiments, these were,
respectively, the bacteria, succinate and fumarate, and aspartate [2].
Tyson, Lubkin, and Murray ultimately analyzed a system of second-order partial differential
equations modeled after Budrene and Berg's experiments in order to understand the numerical
properties of chemotaxis [27]. Their research focused on the patterns formed by E. coli
and salmonella bacteria as they undergo chemotaxis in liquids. The dimensionless form of
the mathematical model that was used for the experiments, as defined in [27], is
    u_t = d_u \nabla^2 u - \alpha \nabla \cdot \left( \frac{u}{(1+v)^2} \nabla v \right)        (4.4)

    v_t = \nabla^2 v + w \frac{u^2}{\mu + u^2},        (4.5)
where u represents the cell density, v is the chemoattractant concentration, and w represents
the succinate concentration. In the experiments the succinate is not consumed, so w is
treated as a constant.
In order to simplify the calculations, let f = \frac{u}{(1+v)^2} and let \nabla v = \langle v_x, v_y \rangle. Then Equations (4.4) and (4.5) become

    u_t = d_u (u_{xx} + u_{yy}) - \alpha \nabla \cdot \langle f v_x, f v_y \rangle
        = d_u (u_{xx} + u_{yy}) - \alpha \left( (f v_x)_x + (f v_y)_y \right)        (4.6)

    v_t = (v_{xx} + v_{yy}) + w \frac{u^2}{\mu + u^2}.        (4.7)
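The equality of the divergence form and the expanded form in Equation (4.6) is the product rule, and the Matlab implementation evaluates the expanded form. The identity (f v_x)_x + (f v_y)_y = f (v_xx + v_yy) + f_x v_x + f_y v_y can be checked numerically; the sketch below (Python, an illustration and not part of the thesis code) uses arbitrarily chosen smooth test functions f and v and central finite differences:

```python
import math

def f(x, y):  # arbitrary smooth test factor (illustrative choice)
    return x + y * y

def v(x, y):  # arbitrary smooth test field (illustrative choice)
    return math.sin(x) * math.cos(y)

h = 1e-3  # finite difference step

def dx(g, x, y):  return (g(x + h, y) - g(x - h, y)) / (2 * h)
def dy(g, x, y):  return (g(x, y + h) - g(x, y - h)) / (2 * h)
def dxx(g, x, y): return (g(x + h, y) - 2 * g(x, y) + g(x - h, y)) / h**2
def dyy(g, x, y): return (g(x, y + h) - 2 * g(x, y) + g(x, y - h)) / h**2

x0, y0 = 0.3, 0.7

# Divergence form: (f*v_x)_x + (f*v_y)_y
fvx = lambda x, y: f(x, y) * dx(v, x, y)
fvy = lambda x, y: f(x, y) * dy(v, x, y)
div_form = dx(fvx, x0, y0) + dy(fvy, x0, y0)

# Product-rule form evaluated by the code: f*(v_xx + v_yy) + f_x*v_x + f_y*v_y
prod_form = (f(x0, y0) * (dxx(v, x0, y0) + dyy(v, x0, y0))
             + dx(f, x0, y0) * dx(v, x0, y0)
             + dy(f, x0, y0) * dy(v, x0, y0))
```

The two values agree to within the finite difference truncation error.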
fp = zeros(N,2);
% Right-hand side of (4.6); the leading assignment and diffusion term
% "fp(:,1) = du*(d2*V(:,1))" are reconstructed, as they are missing
% from the source listing.
fp(:,1) = du*(d2*V(:,1)) ...
    - alpha*((d2*V(:,2)).*(V(:,1)./(1+V(:,2)).^2)...
    + (d1x*V(:,2)).*(d1x*(V(:,1)./(1+V(:,2)).^2)))...
    - alpha*((d1y*V(:,2)).*(d1y*(V(:,1)./(1+V(:,2)).^2)));
The goal of this two-dimensional chemotaxis experiment is to find solutions to (4.4) and
(4.5) that oscillate, grow, and finally decay as time progresses. To simplify the model,
following the experiments of Tyson, Lubkin, and Murray, we assume that the bacteria do not
die [27]; therefore, a death term was not included in Equation (4.4). Similarly, even though
in the physical experiments the chemoattractant increases continually and diffusion
eventually dominates, these long-time effects are not modeled here.
As can be seen from Figures 4.4 and 4.5, the initially random bacteria arrange themselves
into high-density collections. The collections of cells are very dense compared to the
regions between them. When t = 1, there is one cluster in the center; by t = 75, a set of
continuous rings has developed. If the simulation were run longer, the clusters would
likely reach a steady-state pattern. This is exactly what happened in Tyson, Lubkin, and
Murray's numerical experiments and in Budrene and Berg's biological experiments [2, 27].
Figure 4.4: Density plots of the chemotaxis experimental results. Top Left: t = 20, Top
Right: t = 43, Bottom Left: t = 55, Bottom Right: t = 65.
Figure 4.5: Chemotaxis experimental results. Top Left: t = 20, Top Right: t = 43, Bottom
Left: t = 55, Bottom Right: t = 65.
Chapter 5
CONCLUSION
Due to their extreme flexibility, RBF methods have attracted much attention from
mathematicians and scientists. RBF methods overcome some of the deficiencies of polynomial
methods and can be applied on scattered data sites and in regions with complicated
geometries.
As seen in this thesis, both global and local methods can be used when working with
radial basis functions. Here, these methods were applied to Turing equations and
chemotaxis models, but the range of possible applications is vast. As in these examples,
other situations arise that require tens of thousands of data points for centers, and
desktop computers cannot efficiently execute such large problems in a timely manner with
the global method. Not only is time an issue, but so is computer memory. The local method
overcomes these issues by allowing the user to select small stencils of points that can
provide accuracy very similar to the global method while using substantially less computer
memory. As a result, the local method can execute effectively in considerably less time.
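A back-of-the-envelope comparison makes the memory savings concrete. The figures below are not from the thesis: they assume 8-byte double precision entries and ignore the index overhead of sparse storage, using the center and stencil counts from the butterfly-domain Turing example:

```python
# Storage for the differentiation matrix at 8 bytes per double.
N = 8125   # number of centers in the butterfly-domain Turing example
n = 100    # local stencil size

global_bytes = 8 * N * N   # dense N x N matrix used by the global method
local_bytes  = 8 * N * n   # N rows of n nonzero weights in the local method

print(f"global: {global_bytes / 1e6:.0f} MB, local: {local_bytes / 1e6:.1f} MB, "
      f"ratio: {global_bytes / local_bytes:.2f}x")
```

For this example the dense global matrix needs roughly 528 MB, while the local weights need about 6.5 MB, a factor of N/n = 81.25 less.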
High performance computing is now often done heterogeneously with both CPUs and GPUs. As a
future direction of this work, a high performance computing (HPC) cluster could be used to
dramatically increase the accuracy and efficiency of radial basis function methods. By using
an HPC cluster, it would be possible to carry out extensive calculations that require large
amounts of data storage and memory allocation in a vastly shorter time frame than on a
typical desktop computer.
Marshall University's HPC cluster currently features 276 central processing unit cores, 552 gigabytes of memory, and more than
10 terabytes of storage. It currently has 8 NVIDIA Tesla M2050 GPU computing modules
installed with 448 cores each. This configuration provides support for extremely large
parallel computation with an estimated peak of six Teraflops. This is equal to six trillion
floating point operations per second. Because RBF methods are an active research area
with widespread application in engineering and science, formulating the methods to work
efficiently on multi-core processors will further enhance the popularity and applicability
of RBF methods.
Appendix A
MATLAB CODE
%
% driverD.m
%

% Flags controlling the stages below; these assignments are reconstructed,
% as the corresponding lines are missing from the source listing.
CENTERS = true; STENCILS = true; WEIGHTS = true; GO = true;

if CENTERS
    disp('Centers')
    xc = linspace(-1,1,100)';
    xp = linspace(-1,1,200)';
end

if STENCILS
    disp('Stencils')
    ns = 17;                 % stencil size
    st = stencilsD(xc, ns);
end

if WEIGHTS
    disp('Weights')
    shape = 0.43;            % initial shape parameter
    dc = 0.001;              % shape parameter increment
    minK = 1e5;              % minimum condition number of the system matrix
    maxK = 1e15;             % maximum condition number of the system matrix
    D = weightsD(st, xc, shape, minK, maxK, dc);
end

if GO
    disp('Graphing')
    u = go(xc, D, xp);
end
%
% stencilsD.m
%

function st = stencilsD(xc, ns)

N = length(xc);

for i = 1:N                  % stencils for derivative approximation
    x0 = xc(i);
    r = abs(xc(:) - x0);     % distance between center i and the rest of the centers
    [r, ix] = sort(r);
    st(i,:) = ix(1:ns);
end
%
% weightsD.m
%

% The function signature and the opening of the "for" loop are
% reconstructed; they are missing from the source listing.
function D = weightsD(st, xc, shape, minK, maxK, dc)

warning off
N = length(xc);              % total centers
n = length(st(1,:));         % stencil size
o = ones(1,n);
D = sparse(N,N);

for i = 1:N
    pn = st(i,:);
    rx = xc(pn)*o - (xc(pn)*o)';
    r = abs(rx);
    K = 1;

    while (K < minK || K > maxK)
        B = mq(r, shape);
        [U,S,V] = svd(B);
        K = S(1,1)/S(n,n);
        if K < minK, shape = shape - dc;
        elseif K > maxK, shape = shape + dc; end
    end

    Bi = V*diag(1./diag(S))*U';

    h = mqDerivatives(sqrt((xc(i) - xc(pn)).^2), xc(i) - xc(pn), shape, 1);

    D(i,pn) = h'*Bi;

end, warning on
%
% go.m
%

function u = go(xc, D, xp)

f = exp(xc.^3) - cos(2.*xc);

fDerivativeExact = 3.*exp(xc.^3).*xc.^2 + 2.*sin(2.*xc);

fDerivativeApprox = D*f;

format long, format compact
pointWiseErrors = abs(fDerivativeApprox - fDerivativeExact)
format

mean(pointWiseErrors)

twoNormError = norm(fDerivativeApprox - fDerivativeExact, 2)

plot(xp, 3.*exp(xp.^3).*xp.^2 + 2.*sin(2.*xp), 'g', xc, fDerivativeExact, 'r*', ...
     xc, fDerivativeApprox, 'ko')
xlabel('x'), ylabel('f^\prime(x)')
function advectionDiffusion1D()

N = 51;                    % number of centers
dt = 0.005;                % time step
finalTime = 0.5;           % final time
a = 1;                     % advection coefficient
nu = 0.002;                % diffusion coefficient
shape = 6;                 % shape parameter

x = linspace(0,1,N)';
o = ones(1,length(x));
rx = x*o - (x*o)';         % signed distance matrix
r = abs(rx);               % distance matrix

H = zeros(N,N);            % evaluation matrix
U = exactSolution(x, 0);   % initial condition

t = 0;

% (The construction of the derivative matrix dm, the evaluation of the
% exact solution "exact" at the final time, and the time stepping loop
% that advances U using F below are missing from the source listing.)

plot(x, exact, 'r', x, u, 'b')

figure
plot(x, abs(U - exact))

    function fp = F(u, t)
        u(1) = 1;
        u(N) = exactSolution(1, t);
        fp = dm*u;
    end

    function ex = exactSolution(x, t)
        if t < dt
            if length(x) > 1
                ex(1) = 1;
                ex(2:length(x)) = 0;
                ex = ex(:);
            else
                ex = 0;
            end
        else
            den = 2.0*sqrt(nu*t);
            frac1 = (x - t)/den;
            frac2 = (x + t)/den;
            ex = 0.5*(erfc(frac1) + exp(x/nu).*erfc(frac2));
        end
    end

end
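The exactSolution routine above implements the classical advection-diffusion solution u(x,t) = (1/2)[erfc((x-t)/(2*sqrt(nu*t))) + exp(x/nu) * erfc((x+t)/(2*sqrt(nu*t)))]. A small Python check, illustrative and using only the standard library, confirms that the inflow boundary value is u(0,t) = 1 for t > 0, since erfc(-a) + erfc(a) = 2:

```python
import math

def exact_solution(x, t, nu=0.002):
    # Exact solution evaluated by advectionDiffusion1D for t > 0.
    den = 2.0 * math.sqrt(nu * t)
    return 0.5 * (math.erfc((x - t) / den)
                  + math.exp(x / nu) * math.erfc((x + t) / den))

# At the inflow boundary the solution equals 1 for any t > 0.
print(exact_solution(0.0, 0.25))   # very close to 1.0
```

Away from the boundary layer the erfc factors decay rapidly, so for x well inside the domain the solution at moderate t is near zero, matching the step-like initial condition in the code.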
Matlab code used for the 2d advection-diffusion-reaction problem, the Turing system,
and the chemotaxis model follows the same structure as the listings above and is not
reproduced here.
REFERENCES
[1] J. P. Boyd, Chebyshev and Fourier spectral methods, second ed., Dover Publications,
Mineola, New York, 2000.
[2] E. O. Budrene and H. C. Berg, Complex patterns formed by motile cells of Escherichia
coli, Nature 349 (1991), 630–633.
[3] C. Varea, J. L. Aragón, and R. A. Barrio, Turing patterns on a sphere, Physical
Review E 60 (1999), no. 4, 4588–4592.
[4] B. Fornberg and N. Flyer, Accuracy of radial basis function interpolation and deriva-
tive approximations on 1-d infinite grids, Advances in Computational Mathematics 23
(2005), 37–55.
[5] B. Fornberg and G. Wright, Stable computation of multiquadric interpolants for
all values of the shape parameter, Computers and Mathematics with Applications 47
(2004), 497–523.
[6] R. Franke, A critical comparison of some methods for interpolation of scattered data,
Technical Report NPS (1979), 53–79.
[7] D. Goldberg, What every computer scientist should know about floating-point arith-
metic, Computing Surveys (1991), 171–264.
[8] R. L. Hardy, Multiquadric equations of topography and other irregular surfaces, Journal
of Geophysical Research 76 (1971), no. 8, 1905–1915.
[11] T. Jin and D. Hereld, Chemotaxis methods and protocols, Humana Press, New York,
2009.
[13] Brandon Keim, Alan Turing’s patterns in nature, and beyond, http://www.wired.
com/wiredscience/2011/02/turing-patterns/.
[14] W. R. Madych and S. A. Nelson, Bounds on multivariate interpolation and exponential
error estimates for multiquadric interpolation, Journal of Approximation Theory 70
(1992), 94–114.
[15] C. Micchelli, Interpolation of scattered data: Distance matrices and conditionally pos-
itive definite functions, Constructive Approximation 2 (1986), 11–22.
[16] M. L. Overton, Numerical computing with IEEE floating point arithmetic, Society for
Industrial and Applied Mathematics, Philadelphia, 2001.
[17] R. B. Platte, How fast do radial basis function interpolants of analytic functions con-
verge?, IMA Journal of Numerical Analysis 31 (2011), no. 4, 1578–1597.
[18] R. A. Barrio, C. Varea, J. L. Aragón, and P. K. Maini, A two-dimensional numerical
study of spatial pattern formation in interacting Turing systems, Bulletin of Mathemat-
ical Biology 61 (1999), 483–505.
[19] S. Rippa, An algorithm for selecting a good value for the parameter c in radial basis
function interpolation, Advances in Computational Mathematics 11 (1999), 193–210.
[20] S. A. Sarra, Radial basis function approximation methods with extended precision float-
ing point arithmetic, Engineering Analysis with Boundary Elements 35 (2011), no. 1,
68–76.
[21] ———, A local radial basis function method for advection-diffusion-reaction equations
on complexly shaped domains, to appear in Applied Mathematics and Computation
(2012).
[22] S. A. Sarra and E. J. Kansa, Multiquadric radial basis function approximation methods
for the numerical solution of partial di↵erential equations, vol. 2, Advances in Compu-
tational Mechanics, 2009.
[23] T. Sauer, Numerical analysis, Pearson Education, Inc., Boston, 2006.
[24] R. Schaback, Error estimates and condition numbers for radial basis function interpo-
lation, Advances in Computational Mathematics 3 (1995), 251–264.
[25] L. N. Trefethen, Spectral methods in Matlab, SIAM, Philadelphia, 2000.
[26] L. N. Trefethen and D. Bau III, Numerical linear algebra, Society for Industrial and
Applied Mathematics, Philadelphia, Pennsylvania, 1997.
[27] R. Tyson, S. R. Lubkin, and J. D. Murray, Model and analysis of chemotactic bacterial
patterns in a liquid medium, Journal of Mathematical Biology 38 (1999), 359–375.
[28] H. Wendland, Scattered data approximation, Cambridge University Press, Cambridge,
2005.
[29] Z. Wu and R. Schaback, Local error estimates for radial basis function interpolation of
scattered data, IMA Journal of Numerical Analysis 13 (1993), 13–27.
[30] J. R. Xiao and M. A. McCarthy, A local heaviside weighted meshless method for two-
dimensional solids using radial basis functions, Computational Mechanics 31 (2003),
301–315.
Maggie Elizabeth Chenoweth
[email protected]
Education
• Master of Arts. Mathematics. Marshall University, May 2012. Thesis Advisor: Scott
Sarra.
• MTH 121 - Concepts and Applications (Critical Thinking) - Fall 2010 and Fall 2011
• MTH 127 - College Algebra (Expanded) - Summer 2011 and Fall 2011
2. A Local Radial Basis Function Method for the Numerical Solution of Partial Di↵er-
ential Equations. Master’s thesis, Marshall University, May 2012.
Professional Affiliations
• Pi Mu Epsilon - Former Chapter President at Marshall University
• Sigma Xi
• Kappa Delta Pi
Awards and Recognitions
• Dean’s List
• 200 Level Book Award for the Marshall University Honors Program