Article
Implementation of Kalman Filtering with Spiking
Neural Networks
Alejandro Juárez-Lora *,† , Luis M. García-Sebastián † , Victor H. Ponce-Ponce , Elsa Rubio-Espino ,
Herón Molina-Lozano and Humberto Sossa
Instituto Politécnico Nacional, Centro de Investigación en Computación, Mexico City 07738, Mexico
* Correspondence: [email protected]
† These authors contributed equally to this work.
Abstract: A Kalman filter can be used for full state-space reconstruction based on knowledge of a system's dynamics and partial measurements. However, its performance relies on accurate modeling of the system dynamics and a proper characterization of the uncertainties, which can be hard to obtain in real-life scenarios. In this work, we explore how the values of a Kalman gain matrix can be estimated by using spiking neural networks through a combination of biologically plausible neuron models with spike-time-dependent plasticity learning algorithms. The performance of the proposed neural architecture is verified with simulations of some representative nonlinear systems, which
show promising results. This approach traces a path for its implementation in neuromorphic analog
hardware that can learn and reconstruct partial and changing dynamics of a system without the
massive power consumption that is typically needed in a Von Neumann-based computer architecture.
Keywords: Kalman filter; artificial intelligence; spiking neural networks; robotics; dynamics
solution [8]. However, the Von Neumann computer architecture, which is present in all commercially available computing solutions, separates processing and storage into different functional units. Emulating an ANN, a computing strategy that inherently performs storage and processing in the same place, on such an architecture creates a data bottleneck, as the interactions of neurons and synapses are represented in terms of massive matrix multiplications. State-of-the-art research on ANNs is performed with sizable graphics processing units (GPUs) or multiple-core computing solutions, whose power consumption is estimated to surpass humanity's current energy generation capacity if this growth rate continues [9].
It is necessary to rethink how to perform computing in a move away from the Turing
machine, which requires many layers of abstraction, into parallel hardware with distributed
memory [10]. Spiking neural networks (SNNs) are considered the third generation of ANNs.
These models reflect complex biological and temporal dynamics in order to construct
artificial software/hardware counterparts with the same behaviors as those of neurons and
synapses [11]. Neuromorphic computing has emerged as a branch in computer science
that aims to create computer architectures that resemble the brain’s energy efficiency,
learning plasticity, and computing capacity [12]. Neuromorphic hardware has thus become the de facto platform for SNNs, which usually dictate the design of the building blocks for hardware solutions [13] that are usable for robotic platforms. For instance, in [14,15], an SNN learned the inverse kinematics (IK) of a robotic arm manipulator, which is usually hard to obtain analytically. While such networks can be used to reconstruct IK values, the extraction of the specific modeling functions from the network remains an open research topic.
On this basis, the development of neuromorphic accelerators based on existing complementary metal-oxide-semiconductor (CMOS) digital technology is enabling research in neuromorphic computing. Such technologies usually include peripheral devices and software/hardware bridges with conventional computing architectures, thus enabling network analysis, performance measurement, and reconfiguration. Examples include Intel's Loihi 2 chip with 130 million silicon neurons and 256 million synapses [16], which is programmable with the Lava neuromorphic compiler; IBM's TrueNorth with 1 million neurons and 256 million synapses [17]; and BrainChip's Akida [18], which was built with TSMC's 28 nm technology, among others. These have been used to obtain remarkable results in robotics [19], sensing, and classification tasks. However, as the construction of these accelerators relies on expensive proprietary CMOS chip technologies, they face the same scaling and energy consumption limits [20] as their Von Neumann counterparts.
From the perspective of analog electronics, a passive electric device called a memris-
tor [21], which was theorized by Leon Chua, can be used for in-memory computing. It
maintains its internal conductance state based on the current that has flowed through its
terminals. These passive devices can be used in high-density crossbar arrays (CBAs), which
can perform parallel vector–matrix multiplication with ultra-low energy consumption.
Analog neurons and synapses have been assembled to compute values that rely on current
summation rather than digital Boolean operations [20,22], resulting in some already-built
analog neuromorphic architectures [23,24].
This article explores the concept of KalmanNet by entirely replacing its ANN architecture with a proposed SNN architecture that assembles biologically plausible neuron and synapse models. In addition, we propose a new differentiable function for modeling the encoding/decoding algorithms. The proposed architecture was tested in numerical simulations using two well-known nonlinear systems, which showed the feasibility of the solution. At the same time, its construction requirements were explored with the aim of implementing it in neuromorphic hardware capable of online learning in a space- and energy-efficient manner.
This article is structured as follows: In Section 2 (Materials and Methods), neurons,
synapses, and encoding/decoding models are described, and it is shown how these can
be interconnected to create the proposed network solution. Section 3 (Results) shows
numerical simulations with the nonlinear canonical Van der Pol and Lorenz systems used
to test the capabilities of the architecture. Section 4 discusses the results, while Section 5
closes this work by showing our conclusions and proposing future research.
τ_m · dv_m(t)/dt = E_L − v_m(t) + R_m · I_syn(t)    (1)
In (1), vm (t) represents the membrane’s potential, EL is the membrane’s potential
at rest, τm = Rm Cm stands for the membrane’s temporal charging constant, Rm is the
membrane’s resistance, and Cm is the membrane’s capacitance. Isyn (t) acts as an excitatory
input current for the neuron, which charges the membrane’s potential vm (t) until it passes
a threshold voltage value v_th, at which point a spike is emitted. The spike's voltage, v_s(t), is shaped as follows:

v_s(t) = v_spk · δ(t − t_f)    (2)

where t_f is the last moment at which a spike was produced, whereas δ(·) ∈ [0, 1] is the Dirac delta function that models the impulse's decay alongside the synapses, decaying from a maximum value v_spk at t = t_f to zero at the post-synaptic rate τ_pstc:
δ(x) = e^(−(x/τ_pstc)²)    (3)
Once the spike is produced, vm (t) resets to EL . The neuron will not spike again during
a refractory period τre f , as it does not admit an excitatory input current. When Isyn (t) = 0,
vm (t) → EL .
Given a connection between the j-th and k-th neuron by a synapse with a certain
conductance value w jk (the modeling of which will be reviewed in Section 2.2), the input
current for the postsynaptic neuron will be a function of each spike from the presynaptic
neuron and its propagation through the corresponding synapse. For j presynaptic neurons,
the current Isyn (t) for the k-th neuron is modeled by the following expression:
τ_syn · dI_syn/dt = −I_syn(t) + C_syn · Σ_j w_jk · v_spk · δ(t − t_pre^f)    (4)

where t_pre^f is the firing time of each presynaptic neuron. Equations (1) and (4) make up the
conductance-based LIF model [26], where τsyn is the injection current time decay and Csyn
stands for the temporal injection current constant, which models the scale of the current
injection of the presynaptic impulses. Figure 1 shows a step impulse of 1.5 nA fed to a single neuron modeled by Equation (1), showing its internal state v_m(t) and the produced spike voltage v_s(t). The parameters used for the neuron are provided in Table 1.
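To make the membrane dynamics concrete, the following minimal Python sketch integrates Equation (1) with a forward-Euler step, including the threshold/reset rule and the refractory period. The parameter values are illustrative placeholders (chosen to be consistent with the 1.5 nA rheobase discussed below) rather than the exact Table 1 values, which are not reproduced here.

```python
import numpy as np

# Illustrative LIF parameters (placeholders, not the Table 1 values)
E_L, v_th = -65e-3, -50e-3   # resting potential and firing threshold [V]
R_m, tau_m = 10e6, 10e-3     # membrane resistance [Ohm] and time constant [s]
t_ref, dt = 2e-3, 1e-4       # refractory period and integration step [s]

def lif_run(I_syn, T=1.0):
    """Forward-Euler integration of Eq. (1) for a constant input current I_syn [A]."""
    v, refractory_left = E_L, 0.0
    v_trace, spike_times = [], []
    for k in range(int(T / dt)):
        if refractory_left > 0:
            refractory_left -= dt            # clamp to rest during the refractory period
            v = E_L
        else:
            v += (E_L - v + R_m * I_syn) / tau_m * dt
            if v >= v_th:                    # threshold crossed: emit a spike and reset
                spike_times.append(k * dt)
                v, refractory_left = E_L, t_ref
        v_trace.append(v)
    return np.array(v_trace), np.array(spike_times)

v_trace, spikes = lif_run(I_syn=1.5001e-9, T=2.0)
print(f"{len(spikes)} spikes in 2 s")
```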
Figure 1. (a) Membrane voltage v_m(t) and spike voltage v_s(t) of an LIF neuron for an excitatory input current of I_syn = 1.5001 nA. (b) Tuning curve of the neuron, which shows the rheobase value for the parameters given in Table 1.
As the firing frequency is f_spk = 1/T_spk, where T_spk = τ_ref + t_spk, we have:

f_spk(t) = 1 / ( τ_ref − τ_m · ln[ (v_th − E_L − R_m·I_syn(t)) / (−R_m·I_syn(t)) ] )    (7)
Equation (7) computes the frequency response of a neuron given a certain current. The
inverse function computes the opposite—the amount of current needed for a given frequency:
I_syn(t) = (v_th − E_L) / ( R_m · (1 − e^[(τ_ref − 1/f_spk(t)) / τ_m]) )    (8)
Figure 1b shows the firing frequency response of a neuron for a given excitatory input current; this is called a tuning curve, and it was obtained using Equation (7) (analytical solution) and a numerical simulation of Equation (1), with a sweep from 0 A to 6 nA, using the neuron parameter values given in Table 1. Setting f = 1 Hz, we obtain I_r = 1.5 nA. This is called the rheobase current of the neuron.
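Equations (7) and (8) can be checked directly, as in the sketch below, which evaluates the analytical tuning curve, its inverse, and the rheobase current at f = 1 Hz. The same illustrative parameters as in the previous sketch are used; they are placeholders rather than the exact Table 1 values, so the resulting numbers are only indicative.

```python
import numpy as np

E_L, v_th = -65e-3, -50e-3            # illustrative values [V]
R_m, tau_m, t_ref = 10e6, 10e-3, 2e-3

def f_spk(I):
    """Eq. (7): firing frequency for a constant input current (0 below the rheobase)."""
    arg = (v_th - E_L - R_m * I) / (-R_m * I)
    return 0.0 if arg <= 0 else 1.0 / (t_ref - tau_m * np.log(arg))

def I_syn(f):
    """Eq. (8): input current required to fire at frequency f."""
    return (v_th - E_L) / (R_m * (1.0 - np.exp((t_ref - 1.0 / f) / tau_m)))

I_r = I_syn(1.0)                      # rheobase: the current giving f = 1 Hz
print(f"rheobase ~ {I_r * 1e9:.2f} nA, f(2 * I_r) ~ {f_spk(2 * I_r):.0f} Hz")
```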
Δw(Δt) = { A_+ · e^(−Δt/τ_+),  ∀Δt ≥ 0
         { A_− · e^(Δt/τ_−),   ∀Δt < 0    (9)

w_ij = Σ_{t_pre^f} Σ_{t_post^f} Δw    (10)

In Equation (9), Δt = t_post^f − t_pre^f is the difference between the firing times of the
postsynaptic and presynaptic neurons. τ_+, τ_− are the long-term potentiation (LTP) and long-term depression (LTD) constants, which map the decay effect of a spike on the modification of the weight. For each spike, the synaptic weight is then modified by a learning rate of A_+, A_−. When A_+ = A_− and τ_+ = τ_−, the response is symmetrical; that is, the synapse modifies its value equally for presynaptic or postsynaptic spikes. STDP belongs to the unsupervised learning paradigm [8], as no teaching signal is involved other than the input and output signals to be processed.
In reward-modulated STDP (R-STDP), the weight changes proposed by STDP are first accumulated in an eligibility trace E(t):

dE/dt = −E(t)/τ_E + A_+ · v_spk · δ(t − t_pre) + A_− · v_spk · δ(t − t_post)    (11)
The eligibility trace is intended to model the tendency of the change in the synaptic
weight value as a transient memory of all of the spiking activity, where τE depicts its decay
time. The rate of change in the synaptic weights w is then obtained as follows:
dw/dt = R(t) × E(t)    (12)
where R(t) ∈ [−1, 1] is a reward signal, which is defined according to the network's objectives. It is worth mentioning that when R = 0, learning is deactivated, as no change in the synapses is produced. When R = −1, the weights are forced to converge in the opposite direction. Finally, when R = 1, the weight change follows the eligibility trace unaltered.
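The following sketch shows how Equations (3), (11), and (12) can be combined into a reward-modulated update for a single synapse. The spike times, learning rates, and toy reward are ours, chosen only to illustrate the mechanism; they are not the values used in the experiments.

```python
import numpy as np

dt, tau_E, tau_pstc = 1e-4, 50e-3, 5e-3   # step, trace decay, and spike-kernel decay [s]
A_plus, A_minus = 0.01, -0.01             # illustrative LTP / LTD learning rates
v_spk = 1.0                               # spike amplitude used in Eq. (11), illustrative

def delta(x):
    """Eq. (3): smoothed spike kernel."""
    return np.exp(-(x / tau_pstc) ** 2)

def r_stdp_step(E, w, t, t_pre, t_post, R):
    """One Euler step of Eqs. (11) and (12) for a single synapse."""
    dE = -E / tau_E + A_plus * v_spk * delta(t - t_pre) + A_minus * v_spk * delta(t - t_post)
    E += dE * dt
    w += R * E * dt                       # Eq. (12): the reward gates the eligibility trace
    return E, w

# toy run: one presynaptic and one postsynaptic spike, reward R = 1 (learning enabled)
E, w, t_pre, t_post = 0.0, 0.5, 0.010, 0.015
for k in range(2000):
    E, w = r_stdp_step(E, w, k * dt, t_pre, t_post, R=1.0)
print(f"final weight: {w:.4f}")
```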
Three presynaptic neurons and one postsynaptic neuron were arranged as shown in
Figure 2a, and they produced different spiking activities (Figure 2b), showing how the
output neuron’s membrane voltage accumulated with each arriving spike (Figure 2d). As
each neuron spiked with a different frequency, the synaptic weight evolved into different
values (Figure 2c).
[Figure 2: the three presynaptic LIF neurons are driven by constant input currents of 1.51 nA, 2 nA, and 3 nA and connect to the conductance-based LIF output neuron through the R-STDP synapses w11, w12, and w13.]
Figure 2. (a) An SNN with three LIF neurons in the input layer and one neuron in the output layer. (b) Spiking activity of the first layer. (c) Evolution of the synaptic weights. (d) Neural activity (input current,
membrane voltage, and spike voltage) of the output neuron.
Encoding Algorithm
There are several encoding and decoding algorithms that have been proposed in the literature. Some of them aim to reflect biological plausibility, while others ease the construction of neuromorphic devices. Rate-based encoding takes an input signal x(t) ∈ [x_min, x_max] and a minimum and maximum operating spiking frequency of the neuron F = [F_min, F_max], and it uses Equations (7) and (8) to encode/decode, respectively. Nonetheless, the encoding process can also be performed as a function of the variability of the
signal, which can be divided into phase encoding and time-to-first-spike encoding, among
others [27,28]. Step-forward encoding, which was described in [29], is a temporal encoding
algorithm that harnesses the low-pass filter dynamics of the LIF neuron in conjunction
with a temporal encoding methodology. The input signal x (t) is compared with an initial
baseline signal x_b(t) and a sensitivity encoding threshold value x_th. If x(t) > x_b + x_th, a fixed current I_syn^+ is fed into an LIF neuron, which is denoted as N^+. However, if x(t) < x_b − x_th, a fixed current I_syn^− is fed into another LIF neuron (denoted as N^−). Therefore, N^+ will only spike for a growing signal, while N^− will spike for decreasing signals. In this work, the conditional part of this encoding algorithm is replaced with differentiable functions with the aim of easing future mathematical convergence analyses.
Setting α = tanh(c · (x(t) − x_b(t))),

I_syn^+(t) = I_r · (1 + α)    (13)

I_syn^−(t) = I_r · (1 − α)    (14)
where c is a slope modulation constant; for high values of c, the tanh(·) function approaches the hardlim function. The baseline signal for the encoding is then updated by:
x̂(t) = x̂(t − 1) + x_th · δ(t − t_f^+) − x_th · δ(t − t_f^−)    (16)

where t_f^+ stands for the spiking time of the N^+ neuron and t_f^− is the firing time of the N^− neuron. Figure 3a shows a simple configuration for reconstructing an input sine signal,
which is shown in Figure 3b, by using the spiking activities of two neurons (Figure 3c) that
are fed by an encoding block composed of Equations (13) and (14), which feed N + and N −
with the current levels shown in Figure 3d.
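The encoder of Figure 3a can be sketched as follows: Equations (13) and (14) set the two excitatory currents from the tanh of the signal deviation, and Equation (16) moves the baseline (and reconstruction) up or down on each N+/N− spike. The simple spike rule used here (a neuron fires whenever its current clearly exceeds the rheobase) is a stand-in for the full LIF neurons, and the thresholds and constants are illustrative.

```python
import numpy as np

I_r, x_th, c = 1.5e-9, 0.05, 50.0     # rheobase [A], sensitivity threshold, tanh slope (illustrative)

def encode(x, x_b):
    """Eqs. (13)-(14): map the deviation from the baseline into the two excitatory currents."""
    alpha = np.tanh(c * (x - x_b))
    return I_r * (1 + alpha), I_r * (1 - alpha)   # I_syn^+, I_syn^-

# toy usage: track a sine wave, updating the baseline per Eq. (16) on each N+/N- "spike"
t = np.arange(0.0, 1.0, 1e-3)
x, x_hat, recon = np.sin(2 * np.pi * t), 0.0, []
for xk in x:
    I_plus, I_minus = encode(xk, x_hat)
    if I_plus > 1.5 * I_r:            # N+ spikes for a growing signal
        x_hat += x_th
    if I_minus > 1.5 * I_r:           # N- spikes for a decreasing signal
        x_hat -= x_th
    recon.append(x_hat)
print(f"max reconstruction error: {np.max(np.abs(np.array(recon) - x)):.3f}")
```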
The system to be estimated is modeled as:

x_k = f(x_{k−1}, u_k) + w_k    (17)

y_k = h(x_k) + v_k    (18)
where xk ∈ Rn is the state vector of the system, and f (·) is nonlinear and describes the
evolution of the dynamics given the state value at the previous timestep xk−1 and a control
input uk ∈ Rn . yk ∈ Rm is the available output of the system, which is described by h(·).
w ∼ N(0, Q) and v ∼ N(0, R) are additive white Gaussian noise (AWGN) with a covariance
matrix Q ∈ Rn×n and R ∈ Rm×m , respectively, representing the system uncertainties given
by perturbations or noisy measurements. The EKF algorithm retrieves an estimation x̂k|k
that ideally tends to x̂k|k → xk . As f (·), h(·) are nonlinear, the EKF uses a linearized version
of the system’s model by obtaining their respective Jacobians:
A = ∂f/∂x |_{x̂_{k−1|k−1}, u_k}    (19)

C = ∂h/∂x |_{x̂_{k|k−1}}    (20)
where x̂k−1|k−1 is the estimation of the EKF in the previous timestep. The discrete EKF is a
two-step procedure involving a prediction and an update:
1. Prediction: First, a preliminary estimation x̂k|k−1 , ŷk|k−1 is computed by:
2. Update: The second step consists of computing the Kalman gain matrix κ ∈ R^{n×m}, in which the difference between the measurable output and the estimated output of the prediction step is used:
Figure 3. Signal reconstruction using neurons and encoding/decoding algorithms. (a) Assembly of the encoding/decoding blocks, which alternate the input currents of two different neurons. (b) Comparison between the original signal x(t) and the reconstructed signal x̂(t). (c) Spiking activity response (output spikes) for each neuron in the assembly. (d) Input currents I_syn^+, I_syn^− for the neurons (blue) versus the rheobase (red dotted).
Figure 4. Block diagram of the Kalman filter in which the typical Kalman gain-obtaining procedure
is replaced by an SNN.
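The loop in Figure 4 can be summarized by the sketch below: the previous estimate is propagated through f and h, the innovation Δy_k is formed, and the state update applies a Kalman gain κ supplied by the SNN block instead of the covariance-based computation of the standard EKF. The snn_gain callable is a stand-in for the spiking network of Section 2, and using the forward difference of the estimate as Δx̂_k is our assumption about the second input feature.

```python
import numpy as np

def kalman_step(x_prev, u_k, y_k, f, h, snn_gain):
    """One filtering step following Figure 4, with the Kalman gain supplied by an SNN."""
    x_pred = f(x_prev, u_k)           # prediction: x_hat_{k|k-1}
    y_pred = h(x_pred)                # predicted output: y_hat_{k|k-1}
    dy = y_k - y_pred                 # innovation Delta y_k
    dx = x_pred - x_prev              # Delta x_hat_k (assumed feature for the SNN)
    kappa = snn_gain(dx, dy)          # Kalman gain estimated by the spiking network
    return x_pred + kappa @ dy        # update: x_hat_{k|k}

# toy usage on a scalar random walk with a fixed gain standing in for the SNN output
x_est = np.array([0.0])
for y_meas in (0.2, 0.35, 0.5):
    x_est = kalman_step(x_est, None, np.array([y_meas]),
                        f=lambda x, u: x, h=lambda x: x,
                        snn_gain=lambda dx, dy: np.array([[0.5]]))
print(x_est)
```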
[Figure 5. Internal structure of the SNN block: the encoded innovation terms Δx̂_1, ..., Δx̂_n and Δy_1, ..., Δy_m feed two neuron ensembles (Ens+ and Ens−), whose decoded outputs form the entries κ_{1,1}, ..., κ_{n,m} of the Kalman gain matrix.]
3. Results
In order to show the performance of the proposal, two nonlinear systems were used.
For each system, the nonlinear equations were simulated to create noiseless ground-truth
data x(t). Then, the resulting vector was contaminated with noise as described in (17) and (18) by setting
wk , vk with the diagonal covariance matrices Q, R as follows:
Q = I · q²,   R = I · r²,   ν = q²/r²    (31)
where ν = 1 would imply that the state noise and the observation noise have the same
variance, i.e., q2 = r2 . The resulting contaminated data then corresponded to a system
with noisy measurements and unknown perturbations. The simulation was intended
to compare the performance of a standard EKF against the SNN proposal under equal
conditions; that is, only noisy measurements were provided. The SNN had to be able to
recover this information, while for the EKF, Q, R were set as identity matrices, as these
were supposed to be unknown.
To create the system's synthetic data, since the models used can be written in the form ẋ = A(x, u) · x, the solution of the nonlinear system was approximated with a five-term Taylor series expansion, as in [7], assuming that for a small timestep ∆t, f(x(t)) ≈ f(x(t + ∆t)). By doing this, we obtained a discrete system of the form described in Equation (17).
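A sketch of this data-generation step is given below, assuming the ẋ = A(x, u)·x form: the state-transition matrix is approximated with a five-term Taylor series of e^(A·Δt), and the noiseless trajectory is contaminated according to Equations (17), (18), and (31). The actual script differs in detail and is available in the repository listed in the Data Availability Statement; the helper names here are ours.

```python
import numpy as np

def taylor_transition(A, dt, terms=5):
    """Five-term Taylor approximation of the matrix exponential exp(A * dt)."""
    n = A.shape[0]
    F, term = np.eye(n), np.eye(n)
    for k in range(1, terms):
        term = term @ (A * dt) / k
        F = F + term
    return F

def simulate(A_of_x, x0, dt, n_steps, q, r, H):
    """Propagate x_k = F(x_{k-1}) x_{k-1} + w_k and y_k = H x_k + v_k (Eqs. (17), (18), (31))."""
    x, xs, ys = x0.astype(float), [], []
    for _ in range(n_steps):
        F = taylor_transition(A_of_x(x), dt)
        x = F @ x + q * np.random.randn(x.size)              # process noise, Q = I * q^2
        ys.append(H @ x + r * np.random.randn(H.shape[0]))   # measurement noise, R = I * r^2
        xs.append(x.copy())
    return np.array(xs), np.array(ys)
```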
For the SNN, the neuron parameters in Table 1 were used. The synapse, encod-
ing/decoding, and simulation parameters are found in Table 2. The synapses were ran-
domly initialized in the range of [wmin , wmax ]. To display the neural activity, the observed
spike frequency for each neuron f obs was computed as follows:
f_obs = n_obs / T_obs    (32)
where nobs counts how many spikes were produced inside a period of length Tobs = 50 ms.
The procedure was repeated for the whole simulation timeline of t = 60 s, with a simulation timestep of ∆t = 1 × 10⁻⁴ s.
The simulation scripts were coded from scratch using Python (v3.8+) and the NumPy (v1.20+) [30] and SymPy (v1.8+) [31] libraries. However, during our testing, the Lorenz system's SNN was also coded using the snnTorch (v0.5.3+) library [32]. The resulting code is available at the link given in the Data Availability Statement section.
where µ = 3 refers to the damping strength of the oscillations. For this test, we set the output to y = [1, 0] x; that is, only x_1 was available for measurement, while x_2 had to be recovered by the filter.
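Since Equation (33) is not reproduced in this excerpt, the sketch below writes the canonical Van der Pol oscillator in the ẋ = A(x)·x form assumed above, with µ = 3 and the measurement y = [1, 0]x. It can be passed to the simulate helper from the previous sketch; the initial condition and noise levels in the commented call are illustrative.

```python
import numpy as np

mu = 3.0

def A_vdp(x):
    """Canonical Van der Pol oscillator in state-dependent matrix form (assumed shape of
    Eq. (33)): x1_dot = x2, x2_dot = mu * (1 - x1**2) * x2 - x1."""
    return np.array([[0.0, 1.0],
                     [-1.0, mu * (1.0 - x[0] ** 2)]])

H_vdp = np.array([[1.0, 0.0]])   # y = [1, 0] x: only x1 is measured
# xs, ys = simulate(A_vdp, np.array([2.0, 0.0]), dt=1e-4, n_steps=600_000, q=0.1, r=0.1, H=H_vdp)
```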
Figure 6a shows a correct estimation of x2 . This can also be seen in the difference x − x̂
shown in Figure 6b. The κ ∈ R2×1 matrix values estimated by the SNN are displayed in
Figure 6c; these were obtained by using the spikes of the output layer. The evolution of the
synaptic weight is also shown for both ensembles (Ens+, Ens−) in Figure 6d,e, respectively.
While the SNN's estimation became noisier as time moved forward, it can be seen in
Figure 6f that the EKF was not able to properly reconstruct the missing states at any point.
Figure 6. Time evolution of the reconstruction of the Van der Pol oscillator using the proposed
architecture in comparison with the ground truth. (a) Comparison of the ground truth x (blue)
with the reconstruction x̂ (orange) made using the proposed architecture. (b) Time error
reconstruction x − x̂ of the two states of the system. (c) Time evolution of each value of the resulting
Kalman gain matrix. (d,e) Weight value evolution over time of the 3 × 2 synapse set (multiple colors)
for Ens+ and Ens−, respectively. (f) Time error state estimation of the Van der Pol system using the
standard discrete EKF algorithm without knowledge of the covariance matrices Q, R.
The second test used the Lorenz system, written in the form ẋ = A(x) · x:

ẋ = [ −10    10      0  ] [ x_1 ]
    [  28    −1    −x_1 ] [ x_2 ]    (34)
    [   0    x_1   −8/3 ] [ x_3 ]
For this system, the EKF can be implemented by using five Taylor series approximation
terms, as in [7]. In this test, we set the output to y = [1, 0, 0] x, which meant that only the x_1 state was available for measurement. Therefore, x_2 and x_3 had to be recovered.
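For completeness, the state-dependent matrix of Equation (34) and the measurement matrix used in this test can be written directly; as before, they can be fed to the simulate helper sketched earlier (the commented call uses illustrative noise levels and initial condition).

```python
import numpy as np

def A_lorenz(x):
    """State-dependent matrix of Eq. (34): x_dot = A(x) x for the Lorenz system."""
    return np.array([[-10.0, 10.0,        0.0],
                     [ 28.0, -1.0,      -x[0]],
                     [  0.0,  x[0], -8.0 / 3.0]])

H_lorenz = np.array([[1.0, 0.0, 0.0]])   # y = [1, 0, 0] x: only x1 is measured
# xs, ys = simulate(A_lorenz, np.array([1.0, 1.0, 1.0]), dt=1e-4, n_steps=600_000, q=0.1, r=0.1, H=H_lorenz)
```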
Figure 7a shows the estimation of x2 , x3 . The error x − x̂ is shown in Figure 7b for the
three states. The κ ∈ R3×1 matrix values estimated by the SNN are displayed in Figure 7c.
The weight evolution is also shown for both ensembles (Ens+, Ens−) in Figure 7d,e,
respectively. In this test, while the error estimation converged to close to zero for the three
states (Figure 7b), Figure 7f shows that the EKF quickly diverged to infinity at t = 6.9 s due
to the missing noise characterization of the system.
Figure 7. Time evolution of the Lorenz system’s reconstruction when using the proposed architecture
in comparison with the ground truth. (a) Comparison of the ground truth x (blue) with the recon-
struction x̂ (orange) made by using the proposed SNN architecture. (b) Time error reconstruction
x − x̂ of the three states of the Lorenz system. (c) Evolution of each value of the Kalman gain matrix.
(d,e) Weight value evolution over time of the 4 × 3 synapse set (multiple colors) for Ens+ and Ens−,
respectively. (f) Time error estimation of the Lorenz system using the standard discrete EKF algorithm
without knowledge of the covariance matrices Q, R.
4. Discussion
A proper reconstruction of the full state space was achieved. However, some considerations should be addressed. On the one hand, in the KalmanNet structure, the intention behind the use of GRUs is to use them as storage for the internal ANN's memory in order to jointly track the underlying second-order statistical moments required for implicitly computing the Kalman gain [7]. In our SNN proposal, the intention is to replace them with the eligibility traces defined by the R-STDP weight update mechanism (Equation (11)): E collects the weight changes proposed by STDP and thus represents the potentiation/depression tendency of the synaptic weight [8].
The energy consumption of an SNN depends on its spiking activity. Therefore, only the spikes necessary to represent our signals should be produced. Rate-based encoding mechanisms return a constant excitatory input current for a constant input signal (no matter its magnitude), resulting in continuous spiking activity even for non-changing signals. In temporal
encoding schemes, such as the one used in this work, the neurons are only excited based
on the rate of change in the input signal. The introduction of Equations (13) and (14) is
intended to restrain the excitatory input current of the neurons to minimum and maximum
values. Since tanh(·) ∈ [−1, 1], for high rates of change the maximum input current is I_syn = 2·I_r; according to Equation (7) and the neuron parameters in Table 1, this maximum input current of I_syn = 2(1.5 nA) = 3 nA corresponds to a spike frequency of f ≈ 120 Hz. This can be seen in the resulting spike frequency graphs for both the Van der Pol and Lorenz tests (see the Data Availability section).
However, while the neuron parameters were selected to be biologically plausible, a proper selection of the encoding/decoding sensitivity and of the learning rates is fundamental. x_th should be proportioned so that I_syn produces a suitable spiking activity, while A_+, A_− should be large enough for the supplied spikes to modify the synaptic weights appropriately. Low learning rates may require a higher spike frequency but offer higher precision, leading to slow convergence. In contrast, high learning rate values require less spiking activity but lead to a lower precision, which may result in divergence. In addition, to translate this SNN structure into a hardware implementation, the min/max synaptic weight values might have to be restricted to the values observed in available memristive devices.
A mathematical convergence analysis would determine the boundary conditions for selecting proper parameters. However, the LIF reset condition makes the dynamics non-differentiable, which hinders such an analysis as well as the direct adaptation of back-propagation to SNNs. A way to deal with this is to move the analysis to the frequency domain by solving the LIF model and obtaining the tuning curve given by Equation (7) and its corresponding graph (Figure 1b). It can be seen that the function is only differentiable in the range [I_r, ∞). In [15], the authors used a polynomial differentiable tuning curve (which can be obtained through least-squares regression) to avoid this restriction. In this work, the introduction of bounded and differentiable encoding/decoding functions and the usage of two neuron ensembles (Ens+, Ens−) made this approximation unnecessary, as the dynamics of Ens+ are only affected by the growth of the input signals, while Ens− only processes their decay, thus creating a switching dynamical system [33] that might allow us to propose a Lyapunov candidate function whose derivative is negative definite.
Author Contributions: Conceptualization, A.J.-L., L.M.G.-S., V.H.P.-P., E.R.-E., H.M.-L. and H.S.;
methodology, A.J.-L. and L.M.G.-S.; software, A.J.-L. and L.M.G.-S.; validation, A.J.-L., L.M.G.-S.,
V.H.P.-P. and H.S.; formal analysis, A.J.-L., L.M.G.-S. and E.R.-E.; investigation, A.J.-L. and H.M.-L.; re-
sources, V.H.P.-P., E.R.-E., H.M.-L. and H.S.; data curation, A.J.-L.; writing—original draft preparation,
A.J.-L.; writing—review and editing, A.J.-L., L.M.G.-S., V.H.P.-P. and H.S.; visualization, A.J.-L. and
L.M.G.-S.; supervision, V.H.P.-P.; project administration, H.S.; funding acquisition, V.H.P.-P., E.R.-E.,
H.M.-L. and H.S. All authors have read and agreed to the published version of the manuscript.
Funding: The authors are thankful to the Secretaría de Investigación y Posgrado del Instituto Politécnico Nacional for the financial support of the projects with grant numbers 20221780, 20220268, 20221490, and 20220226, as well as for the support from the Comisión de Operación y Fomento de Actividades Académicas and the Consejo Nacional de Ciencia y Tecnología (CONACYT).
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: All of the scripts used in this article are available on the following Github
page: https://github.com/LuisGarcia-S/SNN-Kalman-Filtering (accessed on 14 November 2022).
Acknowledgments: The authors are thankful for the support provided by Instituto Politécnico
Nacional, Secretaría de Investigación y Posgrado, Comisión de Operación y Fomento de Actividades
Académicas, and CONACYT-México for the support to carry out this research.
Conflicts of Interest: The authors declare no conflict of interest.
Abbreviations
The following abbreviations are used in this manuscript:
References
1. Brunton, S.L.; Kutz, J.N. Data-Driven Science and Engineering: Machine Learning, Dynamical Systems, and Control, 2nd ed.; Cambridge
University Press: Cambridge, UK, 2022. [CrossRef]
2. Kaiser, E.; Kutz, J.N.; Brunton, S.L. Sparse identification of nonlinear dynamics for model predictive control in the low-data limit.
Proc. R. Soc. A Math. Phys. Eng. Sci. 2018, 474, 20180335. [CrossRef] [PubMed]
3. Kaheman, K.; Kutz, J.N.; Brunton, S.L. SINDy-PI: A robust algorithm for parallel implicit sparse identification of nonlinear
dynamics. Proc. R. Soc. A Math. Phys. Eng. Sci. 2020, 476, 20200279. [CrossRef] [PubMed]
4. Teng, Q.; Zhang, L. Data driven nonlinear dynamical systems identification using multi-step CLDNN. AIP Adv. 2019, 9, 085311.
[CrossRef]
5. Kálmán, R.E.; Bucy, R.S. New Results in Linear Filtering and Prediction Theory. J. Basic Eng. 1961, 83, 95–108. [CrossRef]
6. Haykin, S. (Ed.) Kalman Filtering and Neural Networks; John Wiley & Sons, Inc.: New York, NY, USA, 2001.
7. Revach, G.; Shlezinger, N.; Ni, X.; Escoriza, A.L.; van Sloun, R.J.G.; Eldar, Y.C. KalmanNet: Neural Network Aided Kalman
Filtering for Partially Known Dynamics. IEEE Trans. Signal Process. 2022, 70, 1532–1547. [CrossRef]
8. Bing, Z.; Jiang, Z.; Cheng, L.; Cai, C.; Huang, K.; Knoll, A. End to End Learning of a Multi-Layered Snn Based on R-Stdp for a
Target Tracking Snake-Like Robot. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA),
Montreal, AB, Canada, 20–24 May 2019; pp. 9645–9651. [CrossRef]
9. Thompson, N.C.; Greenewald, K.H.; Lee, K.; Manso, G.F. The Computational Limits of Deep Learning. arXiv 2020,
arXiv:2007.05558.
10. Sandamirskaya, Y. Rethinking computing hardware for robots. Sci. Robot. 2022, 7, eabq3909. [CrossRef]
11. Tavanaei, A.; Ghodrati, M.; Reza Kheradpisheh, S.; Masquelier, T.; Maida, A. Deep learning in spiking neural networks. Neural
Netw. 2019, 111, 47–63. [CrossRef]
12. Schuman, C.D.; Kulkarni, S.R.; Parsa, M.; Mitchell, J.P.; Date, P.; Kay, B. Opportunities for neuromorphic computing algorithms
and applications. Nat. Comput. Sci. 2022, 2, 10–19. [CrossRef]
13. Kendall, J.D.; Kumar, S. The building blocks of a brain-inspired computer. Appl. Phys. Rev. 2020, 7, 011305. [CrossRef]
14. Zaidel, Y.; Shalumov, A.; Volinski, A.; Supic, L.; Ezra Tsur, E. Neuromorphic NEF-Based Inverse Kinematics and PID Control.
Front. Neurorobotics 2021, 15, 631159. [CrossRef] [PubMed]
15. Volinski, A.; Zaidel, Y.; Shalumov, A.; DeWolf, T.; Supic, L.; Ezra-Tsur, E. Data-driven artificial and spiking neural networks for
inverse kinematics in neurorobotics. Patterns 2022, 3, 100391. [CrossRef] [PubMed]
16. Davies, M.; Wild, A.; Orchard, G.; Sandamirskaya, Y.; Guerra, G.A.F.; Joshi, P.; Plank, P.; Risbud, S.R. Advancing Neuromorphic
Computing With Loihi: A Survey of Results and Outlook. Proc. IEEE 2021, 109, 911–934. [CrossRef]
17. Modha, D.S. The Brain’s Architecture, Efficiency on a Chip. 2016. Available online: https://www.ibm.com/blogs/research/2016
/12/the-brains-architecture-efficiency-on-a-chip/ (accessed on 12 October 2022).
18. Modha, D.S. Products–Akida Neural Processor SoC. 2022. Available online: https://brainchip.com/akida-neural-processor-soc/
(accessed on 12 October 2022).
19. Sandamirskaya, Y.; Kaboli, M.; Conradt, J.; Celikel, T. Neuromorphic computing hardware and neural architectures for robotics.
Sci. Robot. 2022, 7, eabl8419. [CrossRef] [PubMed]
20. Li, Y.; Ang, K.W. Hardware Implementation of Neuromorphic Computing Using Large-Scale Memristor Crossbar Arrays. Adv.
Intell. Syst. 2021, 3, 2000137. [CrossRef]
21. Zhang, X.; Lu, J.; Wang, Z.; Wang, R.; Wei, J.; Shi, T.; Dou, C.; Wu, Z.; Zhu, J.; Shang, D.; et al. Hybrid memristor-CMOS neurons
for in-situ learning in fully hardware memristive spiking neural networks. Sci. Bull. 2021, 66, 1624–1633. [CrossRef]
22. Payvand, M.; Moro, F.; Nomura, K.; Dalgaty, T.; Vianello, E.; Nishi, Y.; Indiveri, G. Self-organization of an inhomogeneous
memristive hardware for sequence learning. Nat. Commun. 2022, 13, 1–12. [CrossRef]
23. Kimura, M.; Shibayama, Y.; Nakashima, Y. Neuromorphic chip integrated with a large-scale integration circuit and amorphous-metal-oxide semiconductor thin-film synapse devices. Sci. Rep. 2022, 12, 5359. [CrossRef]
24. Kim, H.; Mahmoodi, M.R.; Nili, H.; Strukov, D.B. 4K-memristor analog-grade passive crossbar circuit. Nat. Commun. 2021, 12.
[CrossRef]
25. Gerstner, W.; Kistler, W.M.; Naud, R.; Paninski, L. Neuronal Dynamics; Cambridge University Press: Cambridge, UK, 2014.
[CrossRef]
26. Bing, Z.; Meschede, C.; Röhrbein, F.; Huang, K.; Knoll, A.C. A Survey of Robotics Control Based on Learning-Inspired Spiking
Neural Networks. Front. Neurorobotics 2018, 12, 35. [CrossRef]
27. Javanshir, A.; Nguyen, T.T.; Mahmud, M.A.P.; Kouzani, A.Z. Advancements in Algorithms and Neuromorphic Hardware for
Spiking Neural Networks. Neural Comput. 2022, 34, 1289–1328. [CrossRef] [PubMed]
28. Guo, W.; Fouda, M.E.; Eltawil, A.M.; Salama, K.N. Neural Coding in Spiking Neural Networks: A Comparative Study for Robust
Neuromorphic Systems. Front. Neurosci. 2021, 15, 638474. [CrossRef] [PubMed]
29. Juarez-Lora, A.; Ponce-Ponce, V.H.; Sossa, H.; Rubio-Espino, E. R-STDP Spiking Neural Network Architecture for Motion Control
on a Changing Friction Joint Robotic Arm. Front. Neurorobotics 2022, 16, 904017. [CrossRef] [PubMed]
30. Harris, C.R.; Millman, K.J.; Van Der Walt, S.J.; Gommers, R.; Virtanen, P.; Cournapeau, D.; Wieser, E.; Taylor, J.; Berg, S.;
Smith, N.J.; et al. Array programming with NumPy. Nature 2020, 585, 357–362. [CrossRef] [PubMed]
31. Meurer, A.; Smith, C.P.; Paprocki, M.; Čertík, O.; Kirpichev, S.B.; Rocklin, M.; Kumar, A.; Ivanov, S.; Moore, J.K.; Singh, S.; et al.
SymPy: symbolic computing in Python. PeerJ Comput. Sci. 2017, 3, e103. [CrossRef]
32. Eshraghian, J.K.; Ward, M.; Neftci, E.; Wang, X.; Lenz, G.; Dwivedi, G.; Bennamoun, M.; Jeong, D.S.; Lu, W.D. Training spiking
neural networks using lessons from deep learning. arXiv 2021, arXiv:2109.12894.
33. Saito, T. Piecewise linear switched dynamical systems: A review. Nonlinear Theory Its Appl. IEICE 2020, 11, 373–390. [CrossRef]