Exploring Neuromorphic Computing Based on Spiking Neural Networks: Algorithms to Hardware
NITIN RATHI, INDRANIL CHAKRABORTY, and ADARSH KOSTA, School of Electrical and
Computer Engineering, Purdue University, USA
ABHRONIL SENGUPTA, School of Electrical Engineering and Computer Science, Pennsylvania State
University, USA
AAYUSH ANKIT, School of Electrical and Computer Engineering, Purdue University, USA
PRIYADARSHINI PANDA, Electrical Engineering, Yale University, USA
KAUSHIK ROY, School of Electrical and Computer Engineering, Purdue University, USA
Neuromorphic Computing, a concept pioneered in the late 1980s, is receiving a lot of attention lately due to
its promise of reducing the computational energy, latency, as well as learning complexity in artificial neural
networks. Taking inspiration from neuroscience, this interdisciplinary field performs a multi-stack optimiza-
tion across devices, circuits, and algorithms by providing an end-to-end approach to achieving brain-like
efficiency in machine intelligence. On one side, neuromorphic computing introduces a new algorithmic para-
digm, known as Spiking Neural Networks (SNNs), which is a significant shift from standard deep learning and
transmits information as spikes (“1” or “0”) rather than analog values. This has opened up novel algorithmic
research directions to formulate methods to represent data in spike-trains, develop neuron models that can
process information over time, design learning algorithms for event-driven dynamical systems, and engineer
network architectures amenable to sparse, asynchronous, event-driven computing to achieve lower power
consumption. On the other side, a parallel research thrust focuses on development of efficient computing
platforms for new algorithms. Standard accelerators that are amenable to deep learning workloads are not
particularly suitable to handle processing across multiple timesteps efficiently. To that effect, researchers have
designed neuromorphic hardware that relies on event-driven sparse computations as well as efficient matrix
operations. While most large-scale neuromorphic systems have been explored based on CMOS technology,
recently, Non-Volatile Memory (NVM) technologies show promise toward implementing bio-mimetic func-
tionalities on single devices. In this article, we outline several strides that neuromorphic computing based on
spiking neural networks (SNNs) has taken over the recent past, and we present our outlook on the challenges
that this field needs to overcome to make the bio-plausibility route a successful one.
CCS Concepts: • Theory of computation → Design and analysis of algorithms; • Computing method-
ologies → Bio-inspired approaches;
Additional Key Words and Phrases: Neuromorphic Computing, Spiking Neural Networks, bio-plausible learn-
ing, spike-based backpropagation, event cameras, In-Memory Computing, Non-Volatile Memories, asynchro-
nous communication
ACM Reference format:
Nitin Rathi, Indranil Chakraborty, Adarsh Kosta, Abhronil Sengupta, Aayush Ankit, Priyadarshini Panda,
and Kaushik Roy. 2023. Exploring Neuromorphic Computing Based on Spiking Neural Networks: Algorithms
to Hardware. ACM Comput. Surv. 55, 12, Article 243 (March 2023), 49 pages.
https://doi.org/10.1145/3571155
1 INTRODUCTION
In the seminal book “The Computer and the Brain” [241], John von Neumann discussed how the
brain can be viewed as a computing machine. Since then, there have been a multitude of works
trying to perform brain-like functions with brain-like architectures. Neural networks, specifically
Deep Learning, have powered the current era of ubiquitous artificial intelligence, demonstrating
unprecedented success, even surpassing humans in several cognitive tasks [116, 214]. But at what
cost?
While our fastest parallel computers have enabled deep learning, they are primarily limited by
their ability to move data between the compute and memory, which is in stark contrast to the mas-
sively parallel, sparse, event-driven, distributed processing capabilities of the brain. Consequently,
there is a substantial energy-efficiency gap between the brain and deep learning architectures.
With increasing complexity of tasks in the era of in-sensor analytics and Internet of Things (IoT)
and with the ever-growing size of networks deployed for such tasks, implementing and training
such deep neural networks on edge devices with constrained power, energy, and computational budgets has become a daunting task [257].
The computational paradigm underlying Neuromorphic Computing is an emerging class of artificial neural networks that attempts to mimic neuronal and synaptic functionalities temporally and in a distributed fashion, based on neuron “spikes” or firing events in the brain [47, 70]. Termed Spiking Neural Networks (SNNs) [133], these networks open up possibilities of sparse, event-driven neuronal computation and temporal encoding, a shift from standard deep learning networks, here termed Analog Neural Networks (ANNs), which process and transmit analog values rather than all-or-nothing spikes.
Owing to the unique features of SNNs, there is a need to explore new algorithmic directions that are more amenable to their implicit recurrence and their event-driven, sparse nature of computing. SNNs,
by their design, are dynamical recurrent systems; the internal state of the spiking neuron integrates
temporal information and maintains a history of previous inputs. Training recurrent networks
with binary signals exacerbates the issue of exploding and vanishing gradients. While the event-
driven nature of SNNs offers a promising route for achieving lower energy and power consumption
for intelligent hardware, it also poses a critical limitation on their learning capability. Integrating
temporally encoded statistics of spiking neurons/synapses with standard gradient-descent-based
learning algorithms (catered for ANNs that do not encode information in time) presents several
challenges [123, 265]. It is difficult to train the layers of a deep SNN architecture globally in an
end-to-end manner [181]. Bio-plausible unsupervised [52] and supervised [99] learning have been explored as viable solutions that allow localized learning and have proven to be computationally more efficient than backpropagation-based algorithms. Although SNNs trained with unsupervised
learning algorithms are not yet suitable for challenging cognitive tasks, they find applications in
clustering and extracting low-level features from images for recognition. There has also been a
significant thrust towards developing scalable gradient-based algorithms that can be adapted to
event-driven, sparse activity in SNNs [16, 166]. Overall, these algorithmic drives promise to scale up the performance of SNNs to levels currently offered by ANNs, while preserving the benefits of sparse event-based computations. Also, current algorithms do not take full advantage of the quintessential “time” parameter; future research can explore these opportunities along with implementing additional bio-plausible functions to further improve the performance of SNNs.
A parallel research thrust focuses on the development of efficient computing platforms for
the evolving SNN algorithms and workloads. Hardware for SNN workloads derives certain
motivation from the broad field of Neuromorphic Engineering. The concept of Neuromorphic
Engineering was first proposed by Carver Mead in his seminal book [144, 145], where he explored
the notion of analog circuits to mimic complex neuronal and synaptic functionalities in the brain.
This was later demonstrated in the groundbreaking work of implementing a Silicon Neuron [139] and then a Silicon Retina by Mahowald et al. [138]. Various neuromorphic chips have since been implemented, such as ROLLS [187], Dynap-SEL [154], Neurogrid [21], SpiNNaker [171], and so on. These implementations target emulation of the biophysics of neurons and synapses of the brain through CMOS circuits. SNN workloads tend to use simpler neuro-synaptic models, and can potentially take inspiration from the aforementioned works that have delved deeper into emulating brain-like characteristics on chip. Besides neuro-synaptic models, another key component of
SNN hardware is acceleration of the computations in SNN workloads. The requirements of
such SNN accelerators have evolved significantly over the last few years, particularly toward
designing large-scale systems effectively leveraging key features of SNN algorithms. The accel-
erators targeted toward executing standard ANN workloads more efficiently such as General
Purpose Graphics Processing Units (GPGPUs) and Tensor Processing Units (TPUs) [98]
are not designed to optimize processing across multiple timesteps. Neuromorphic hardware
architectures draw inspiration from two basic principles of SNNs: (i) event-driven sparse com-
putations and (ii) efficient and parallel matrix operations. While ANN accelerators are designed
to efficiently perform matrix operations, they fail to leverage temporal sparsity in SNNs. The
temporal sparsity has inspired various architectures such as TrueNorth [6] and Loihi [46], which deploy asynchronous computing systems to reduce the compute and communication of data. The second feature is more generic to neural networks and has driven significant development of domain-specific accelerators [5, 163] with intertwined memory and compute elements to overcome the von-Neumann bottleneck. While most large-scale SNN accelerators are based on CMOS technology, several Non-Volatile Memory (NVM) technologies have emerged as promising candidates for building bio-mimetic devices [31, 37, 269]. Such devices can emulate neu-
ronal [172, 206, 234] and synaptic functionalities [28, 84, 97, 111, 185, 224, 226] at a one-to-one level
while simultaneously enabling a novel paradigm of “In-Memory” computing, i.e., in situ synaptic
computations [262] for acceleration of SNN workloads [11, 216]. Thus, there is a need to co-design neuromorphic algorithms and the underlying computing architectures through multi-disciplinary research [88].
This article outlines recent developments in the domain of Spiking Neural Networks under the
umbrella of neuromorphic engineering in terms of driving research directions in algorithms as
well as hardware. Whether we will be able to achieve the elusive energy-efficiency in machine intelligence platforms is a question that is difficult to answer currently. However, we be-
lieve a rethinking of brain-inspired computing with a unified hardware-software perspective is
essential to enable computationally efficient learning. Combining outlooks from varying fields—
computational neuroscience, machine learning, materials, devices, circuits, and architectures—will
help outline the challenges posed by these different outlooks and provide a future direction for in-
telligent platforms with power and energy-efficiency akin to the brain.
2 ALGORITHMS
In this section, we delve into the algorithmic foundations for SNNs. Significant developments in
neuron modeling, input encoding, learning algorithms, and network architectures have shaped
the progress of neuromorphic computing. Due to the representation of inputs with time domain
information, considerable rethinking of the training algorithms, currently in place for ANNs, is
required.
2.1.1 LIF/IF Neuron Model. The dynamics of the IF/LIF model is described by the differential equation

\tau \frac{du(t)}{dt} = -[u(t) - u_{rest}] + RI(t), \quad (1)

where u is the internal state known as the membrane potential, u_{rest} is the resting value of the membrane potential, R (I) is the input resistance (current), and \tau is the time constant [69]. The equation represents the behavior of the neuron when the membrane potential (u) is below the threshold potential (v). The membrane potential integrates the input current over time and the neuron fires when the potential crosses the threshold voltage. IF/LIF is a very simple model and does not reflect the overall complex dynamics of a biological neuron. However, its simplicity makes it attractive to optimize in deep learning frameworks. For the LIF neuron, the integration is leaky and the membrane potential decays over time in the absence of input stimuli (Figure 1). The firing event is sometimes followed by a refractory period during which the membrane potential does not integrate the input current. This avoids excessive firing of a particular neuron and allows other neurons to participate in the learning [52]. The membrane potential is reset after firing and returns to its rest value after the refractory period (Figure 1).

Fig. 1. Dynamics of a leaky-integrate-and-fire (LIF) model in response to input spikes.
The continuous time domain differential equation (Equation (1)) is solved to get an iterative update rule [254] where \tau is mapped to \lambda through integration as

u_i^t = \lambda u_i^{t-1} + \sum_j w_{ij} o_j^t - v\, o_i^{t-1}, \quad (2)

o_i^{t-1} = \begin{cases} 1, & \text{if } u_i^{t-1} > v \\ 0, & \text{otherwise,} \end{cases} \quad (3)
where u is the membrane potential, subscript i and j represent the post- and pre-neuron, respec-
tively, superscript t is the timestep, λ is a constant (<1) responsible for the leak in membrane
potential, w is the weight connecting the pre- and post-neuron, o is the output spike, and v is
the threshold potential. The second term in Equation (2) integrates the inputs and the last term
resets the membrane potential after firing. The reset mechanism reduces the membrane potential
by the threshold value instead of resetting it to the reset potential. This reduces information
loss and leads to better performance in image classification tasks [77]. Most hardware implementations of LIF models employ reset to ground; however, a few of them provide the functionality of reset by subtraction [4].
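As a concrete illustration of Equations (2) and (3), the following minimal NumPy sketch (our own illustrative code, not taken from any cited work) simulates a fully connected layer of LIF neurons with reset by subtraction; the layer sizes, threshold, and leak values are arbitrary assumptions.

import numpy as np

def lif_layer(spikes_in, w, v_th=1.0, leak=0.95, timesteps=25):
    """Simulate a fully connected LIF layer following Equations (2)-(3).
    spikes_in: binary array of shape (timesteps, n_pre)
    w: weight matrix of shape (n_post, n_pre)
    Returns binary output spikes of shape (timesteps, n_post)."""
    n_post = w.shape[0]
    u = np.zeros(n_post)                      # membrane potential
    out = np.zeros((timesteps, n_post))
    for t in range(timesteps):
        # leak, integrate weighted input spikes, subtract threshold for the previous spike
        u = leak * u + w @ spikes_in[t] - v_th * out[t - 1]
        out[t] = (u > v_th).astype(float)     # fire when the potential crosses the threshold
    return out

# toy usage: 10 Poisson-like input channels driving 4 LIF neurons
rng = np.random.default_rng(0)
x = (rng.random((25, 10)) < 0.3).astype(float)
w = 0.5 * rng.standard_normal((4, 10))
print(lif_layer(x, w).sum(axis=0))            # spike count per output neuron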
2.1.2 Stochastic Neuron Model. The deterministic IF/LIF neuron model, discussed above, emits a spike as soon as the membrane potential crosses the threshold voltage. If the membrane potential is below the threshold at any timestep, then the computation in the next timestep begins with the previous membrane potential. In contrast, a probabilistic neuron model fires stochastically, and the probability of firing at a particular time is a non-linear function of the instantaneous magnitude of the weighted input [19, 206, 242]. The probability of firing of a post-neuron is defined as

P(o_i = 1) = \frac{1}{1 + e^{-\sum_j w_{ij} o_j}}, \quad (4)

where o_j is the spike input (1 or 0) from pre-neurons and w_{ij} is the synaptic weight connecting pre- and post-neuron. In the absence of any input activity and bias (\sum_j o_j = 0), the firing probability is
0.5. A network with stochastic neurons is evaluated over multiple iterations and the output of a
layer is computed as the average number of spikes over all iterations. The average analog value
can then be rate-coded (more on this in Section 2.2.1) as a Poisson spike train to act as input to the
next layer. These stochastic models are difficult to implement in hardware.
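A minimal sketch of the stochastic firing rule in Equation (4); the network size and the number of evaluation iterations are illustrative assumptions.

import numpy as np

def stochastic_fire(w, spikes_in, rng):
    """Sample output spikes with probability sigmoid(weighted input), Equation (4)."""
    p = 1.0 / (1.0 + np.exp(-(w @ spikes_in)))    # firing probability per post-neuron
    return (rng.random(p.shape) < p).astype(float)

rng = np.random.default_rng(1)
w = rng.standard_normal((4, 10))
x = (rng.random(10) < 0.5).astype(float)
# the average spike count over many iterations approximates the analog output
avg = np.mean([stochastic_fire(w, x, rng) for _ in range(1000)], axis=0)
print(avg)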
Fig. 2. Different input encoding methods for SNNs. In rate coding, the information is represented in the
average number of spikes in a given duration. In Poisson rate coding, at every timestep (t ) a random number
[0, 1] is generated for each neuron and compared with the corresponding pixel value; the neuron fires if the
pixel value is greater than the random number. In Temporal Switch Coding, a form of temporal coding, the
time difference between two spikes (ts+ and ts− ) encodes the input data; p ∗ is proportional to the value being
encoded. Event cameras capture the asynchronous changes in log intensity as discrete spikes over time [158].
2.2.1 Rate Coding. In rate coding, the analog input value is represented by the average number of spikes over a given duration (Figure 2); a precise representation of the analog value by a spike-train therefore requires many spikes. This leads to adopting a large number of timesteps for high accuracy at the expense of high inference latency [209]. Rate coding is inefficient due to minimal information content in each spike.
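A minimal sketch of Poisson rate coding as described in the Figure 2 caption (pixel values compared against fresh uniform random numbers at every timestep); array shapes are arbitrary assumptions.

import numpy as np

def poisson_rate_code(image, timesteps, rng):
    """image: float array with pixel values normalized to [0, 1].
    Returns a binary spike train of shape (timesteps, *image.shape)."""
    # a pixel fires at timestep t if its intensity exceeds a fresh random draw
    return (rng.random((timesteps,) + image.shape) < image).astype(np.uint8)

rng = np.random.default_rng(0)
img = rng.random((28, 28))                     # stand-in for a normalized grayscale image
spikes = poisson_rate_code(img, timesteps=100, rng=rng)
print(spikes.mean(axis=0)[0, :5], img[0, :5])  # firing rates approximate the pixel values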
2.2.2 Temporal Coding. Unlike rate coding, in temporal coding, the timing of individual spikes
is crucial, as the information is encoded in the timing instances. The Logarithmic Temporal Cod-
ing [267] encodes the analog value as a binary number with a predefined number of bits. The
number of bits serves as a proxy for time and a spike is generated for each active bit in the binary
number. For sparser representations, the spike is generated only for the most significant bit. Rank
Order Coding [48] represents the information as the order of firing instances instead of using the
precise timing of the spike. The input neurons encoding larger analog values fire earlier compared
to neurons encoding smaller values. Time-to-First-Spike [156, 198], a form of Rank Order Coding,
restricts each neuron to spike only once. The order or time of spike is inversely proportional to the
analog value being encoded. In Temporal Switch Coding [76], the analog value is encoded using
two spikes, and the time difference between the spikes is proportional to the encoded value. It
achieves better energy-efficiency, since at most two memory accesses and two addition computa-
tions are performed for each synapse. Although temporal coding methods can encode information
with fewer spikes, the lack of appropriate training algorithms for temporally coded SNNs
results in sub-optimal performance compared to rate-coded SNNs [198].
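A sketch of time-to-first-spike encoding, where each input neuron fires exactly once at a latency inversely related to its value; the linear value-to-latency mapping used here is one simple illustrative choice among several in the literature.

import numpy as np

def time_to_first_spike(values, timesteps):
    """values in [0, 1]; larger values spike earlier. Returns a (timesteps, n) binary train."""
    n = values.shape[0]
    spikes = np.zeros((timesteps, n), dtype=np.uint8)
    # latency grows as the value shrinks; value 1.0 fires at t = 0, value ~0 fires last
    latency = np.round((1.0 - values) * (timesteps - 1)).astype(int)
    spikes[latency, np.arange(n)] = 1          # exactly one spike per input neuron
    return spikes

print(time_to_first_spike(np.array([0.9, 0.5, 0.1]), timesteps=10).T)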
2.2.3 Encoding Layer. Rate and temporal coding, discussed earlier, are fixed formula-based, non-
parameterized coding methods. Alternatively, the input encoding can be made part of the training
process and, therefore, the encoding function can have parameters that are trained with the input
data. The encoding function is modelled as a neural network that receives the analog values and
generates a spike train. In some cases, this network is as simple as one convolution layer [132,
193, 199]. The encoding layer is appended to the front of the SNN and trained end-to-end with
the entire SNN. The encoding layer consists of IF/LIF neurons that integrate the weighted analog
values and generates a spike-train. The input to the encoding layer at every timestep is the same
analog value.
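A minimal PyTorch-style sketch of such a trainable encoding layer: a single convolution whose output drives IF neurons, with the same analog frame applied at every timestep. The layer sizes and threshold are illustrative assumptions, and the non-differentiability of the spike is ignored here (during training it is typically handled with surrogate gradients, as discussed later).

import torch
import torch.nn as nn

class ConvEncoder(nn.Module):
    """Direct input encoding: analog pixels -> weighted convolution -> IF neurons -> spikes."""
    def __init__(self, in_channels=3, out_channels=16, v_th=1.0):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1)
        self.v_th = v_th

    def forward(self, x, timesteps=5):
        u = torch.zeros_like(self.conv(x))          # membrane potential of the IF neurons
        spike_train = []
        for _ in range(timesteps):
            u = u + self.conv(x)                    # same analog frame at every timestep
            s = (u >= self.v_th).float()            # fire on crossing the threshold
            u = u - s * self.v_th                   # reset by subtraction
            spike_train.append(s)
        return torch.stack(spike_train)             # shape (timesteps, batch, C, H, W)

enc = ConvEncoder()
spikes = enc(torch.rand(1, 3, 32, 32), timesteps=5)
print(spikes.shape, spikes.mean().item())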
2.2.4 Event-based Sensors. All the encoding methods discussed above are designed to encode the original image frames captured by standard cameras into pixel-wise temporal spikes that constitute the inputs for the SNN. However, since the original input had no time information to begin with, this spatio-temporal representation is not very well justified. Standard cameras that are typically used to capture image as well as video information suffer from a variety of
drawbacks for real-world and edge-applications. Their low and fixed sampling rate makes them
prone to motion blur when capturing high-speed motion. Their low dynamic range of operation
renders them unable to capture meaningful information in challenging lighting conditions such
as low light and high dynamic range (HDR) environments. In addition, sampling the entire
image frame at regular intervals makes them capture redundant information over and over in
nearly static scenes leading to a high power consumption. These shortcomings motivate the need
for specialized sensing modalities for efficient operation in challenging real-world environments
while being significantly energy-efficient.
Event-based sensors (such as DVS128 [129], DAVIS240 [29], Samsung DVS [217], etc.) aim to
address these concerns by offering asynchronous sensing of change in visual information from the
environment. Also known as bio-inspired silicon retinas, event cameras detect the log-intensity (I )
changes at each pixel element asynchronously and independently and generate a spike event if
the change exceeds a threshold (C):
\log(I_{t+1}) - \log(I_t) \geq C. \quad (5)
These cameras employ a threshold-variation-insensitive algorithm over the standard asynchro-
nous sigma-delta modulation when handling the input intensity signal. This results in idle output
for no change in input intensity. Since only the log intensity changes are monitored and recorded,
a very low power consumption can be achieved. Due to the fundamentally different working prin-
ciple compared to standard cameras, event-cameras provide exceptionally high temporal resolu-
tion (10μs vs. 3ms), high dynamic range (120dB vs. 74dB), and low power consumption (∼10mW vs.
3W). Event-cameras offer advantages over standard cameras in scenarios such as real-time human-machine interface systems, robotics, wearable electronics, or vision-based edge-devices in general, where efficient operation in challenging lighting conditions, low latency, and low power consumption [130] are paramount.
such as object detection and tracking [151], gesture recognition [10], optical flow/depth, egomo-
tion estimation [260, 270, 272], and so on.
Event-based encoding results in asynchronous and sparse spatio-temporal data containing both
structural and temporal information in the form of a voxel. This type of input representation can
be naturally handled by asynchronous event-driven models such as SNNs, as will be discussed in
later sections. Nevertheless, most of the research using event cameras has been carried out either
in conjunction with traditional computer vision methods or with ANNs. This requires construct-
ing event frames in place of image frames (as with standard cameras) by accumulating events
over a certain time interval and subsequently considering them equivalent to image frames there-
after. For example, works such as References [260, 270, 272] utilize these event frames as channels
serving as input to an ANN. This leads to the loss of essential temporal information within the
accumulation interval as well as the temporal ordering of individual frames. Although they show promising potential when compared with approaches using standard cameras, they still do not completely utilize the fundamental benefits of event cameras. This is because the asynchronous and discrete nature of event camera data makes it incompatible with traditional ANNs in their native form, which rely on frame-based information. In addition, ANN-based methods are
typically designed for pixel-based images following the photo-consistency assumption (color and
brightness of an object remains the same over image sequences). Certain works such as References
[22] and [23] explore using events directly for estimating visual-flow, however, they are limited in
terms of scalability to more complex problems. In light of this, SNNs inspired by the biological neu-
ron model, offer direct handling of event data providing asynchronous computations while also
exploiting the rich spatio-temporal dynamics and inherent input sparsity. Moreover, using event
data with SNNs eliminates the need for having any complicated spike encoding stage required for
ANN-based methods. The working principle of spiking neurons naturally offers compatibility for
event-based asynchronous processing distributed across SNN layers and, combined with compu-
tation on specialized neuromorphic hardware such as IBM’s TrueNorth [6] or Intel’s Loihi [46],
provides high energy-efficiency. Section 3.4 discusses some works combining event cameras with
SNNs along with the benefits as well as challenges associated with them.
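A sketch of the event-frame construction described above: raw (x, y, timestamp, polarity) events are binned into a fixed number of temporal slices per polarity, which is how many ANN pipelines consume event data (and where the fine-grained timing inside each bin is lost). The event tuple layout and grid size are assumptions.

import numpy as np

def events_to_voxel(events, num_bins, height, width):
    """events: array of shape (N, 4) with columns (x, y, t, polarity in {0, 1}).
    Returns spike-count frames of shape (num_bins, 2, height, width)."""
    voxel = np.zeros((num_bins, 2, height, width), dtype=np.float32)
    t0, t1 = events[:, 2].min(), events[:, 2].max()
    # assign every event to a temporal bin; timing inside a bin is discarded
    bins = ((events[:, 2] - t0) / max(t1 - t0, 1e-9) * (num_bins - 1)).astype(int)
    for (x, y, _, p), b in zip(events, bins):
        voxel[b, int(p), int(y), int(x)] += 1.0
    return voxel

rng = np.random.default_rng(0)
ev = np.column_stack([rng.integers(0, 64, 1000), rng.integers(0, 48, 1000),
                      np.sort(rng.random(1000)), rng.integers(0, 2, 1000)])
print(events_to_voxel(ev, num_bins=5, height=48, width=64).shape)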
In pair-based STDP, the change in synaptic weight depends on the relative timing of the pre- and post-synaptic spikes:

\Delta w = \begin{cases} A_+ e^{-\Delta t/\tau_+}, & \text{if } \Delta t = t_{post} - t_{pre} > 0 \\ -A_- e^{\Delta t/\tau_-}, & \text{if } \Delta t = t_{post} - t_{pre} < 0, \end{cases} \quad (6)

where t_{pre} and t_{post} are the time instants of a pair of pre- and post-spikes, and A_+, A_-, \tau_+, and \tau_- are the learning rates and time constants governing the change, \Delta w, in the synaptic weight. Another variant of the STDP rule performs both potentiation² and depression based on the positive values of \Delta t (Figure 3(d)) [52, 192]. The weight update is computed as

\Delta w = \eta \times \left[ e^{\frac{t_{pre} - t_{post}}{\tau}} - \text{offset} \right] \times [w_{max} - w]^{\mu}, \quad (7)

where \Delta w is the change in weight, \eta is the learning rate, t_{pre} and t_{post} are the time instants of the pre- and post-synaptic spikes, \tau is the time constant, offset is a constant used for depression, w_{max} is the maximum constraint imposed on the synaptic weight, w is the previous weight value, and \mu is a constant that governs the exponential dependence on the previous weight value. The weight update
is positive (potentiation) if the post-neuron spikes immediately after the pre-neuron and negative
(depression) if the spikes are far apart (Figure 3(d)). There are many other variants of STDP curves
in response to spike pair stimulation, including spike triplets and quadruplets [73]. STDP modu-
lates the strength of each synapse independently; while this is very powerful, it also introduces
stability problems. Therefore, mechanisms to maintain an appropriate level of distributed activity
throughout the network are explored in Reference [3]. In STDP, effective synapses are strength-
ened and ineffective synapses are weakened, which creates a positive feedback loop and leads to
stability issues.
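A sketch of the pair-based weight update in Equation (7), using the parameter values listed in the Figure 3(d) caption (η = 0.002, τ = 20 ms, offset = 0.4, w_max = 1, μ = 0.9); treating these as fixed per-update constants is our assumption.

import numpy as np

def stdp_update(w, t_pre, t_post, eta=0.002, tau=20.0, offset=0.4, w_max=1.0, mu=0.9):
    """Weight change for one pre/post spike pair (times in ms), following Equation (7)."""
    dw = eta * (np.exp((t_pre - t_post) / tau) - offset) * (w_max - w) ** mu
    return w + dw

w = 0.5
print(stdp_update(w, t_pre=10.0, t_post=12.0))   # post fires just after pre -> potentiation
print(stdp_update(w, t_pre=10.0, t_post=80.0))   # spikes far apart -> depression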
In addition to non-volatile STDP, which is non-adaptive to different learning scenarios (due to the absence of any forgetting mechanism in the absence of a spike stimulus), synaptic learning mechanisms such as Short-Term Plasticity (STP) and Long-Term Potentiation (LTP) [140, 274], bearing resemblance to the concepts of Short-Term and Long-Term Memory [13, 113], have also been explored for online learning that can adapt synaptic weights to dynamically changing environments. Catastrophic forgetting, a phenomenon observed in continual learning where knowledge about old tasks is lost when new tasks are learned, is a prevalent issue in neural networks, and SNNs trained with STDP also suffer from it [173]. An adaptive weight decay mechanism [173] that gradually forgets less important weights and learns new information in its place can effectively learn new data and avoid catastrophic forgetting.

² In neuroscience, the increase in synaptic strength (positive change in weight) is called potentiation, and the reverse (negative change in weight) is termed depression.

Fig. 3. Unsupervised learning in SNN based on the STDP learning rule; (a) the input image is converted to a spike-train, where the probability of spiking is proportional to the pixel intensity; (b) a 2-layer fully connected network with inhibitory connections receives spike inputs and the weights are trained based on the STDP learning rule. The input-to-excitatory layer weights are trained, whereas the excitatory-to-inhibitory and inhibitory-to-excitatory layer connections are fixed before training [52]; (c) visualization of input-to-excitatory layer weights shows that each excitatory neuron learns a unique class; (d) STDP learning rule (Equation (7)) based on temporal correlation between pre- and post-synaptic spikes (η = 0.002, τ = 20 ms, offset = 0.4, w_max = 1, w = 0.5, μ = 0.9).
SNNs are being actively explored for unsupervised pattern recognition due to their ability to
learn input representations using STDP-based localized learning rules. The unsupervised feature
learning capability of SNNs was initially demonstrated using a two-layer SNN consisting of an in-
put layer fully connected to neurons in the excitatory (or output) layer followed by an inhibitory
layer (Figure 3) [52, 74]. However, the performance of such two-layer SNNs is still limited to sim-
ple tasks like handwritten digit recognition. Convolutional SNNs, similar in architecture to their
deep learning counterparts [107], were proposed to address the limited scalability of two-layer
SNNs [63, 103, 121, 141, 221, 223, 230, 232]. The convolutional layers can be trained using STDP for
learning hierarchical input representations in an unsupervised manner, which are then fed to the
classifiers trained using supervised learning rules for inference. While STDP-trained convolutional
SNNs yield improvement over fully connected two-layered SNNs, the accuracy is still lower than
state-of-the-art performance on popular benchmark datasets. The challenge for STDP-trained con-
volutional SNNs is two-fold. First, it remains to be seen how deep these SNNs can be scaled, since
the current networks are limited to a few convolutional layers. Second, it is inconclusive if STDP
alone can enable the deeper layers to learn complex input representations. STDP is powerful at
clustering and extracting low-level features from images but fails to generalize on composing high-
level features and, therefore, two or more layers of STDP learning do not provide much benefit [63].
In hierarchical learning with convolutional layers, it is necessary to combine learned features into
high-order features that can perform recognition. Some algorithms that can supplement the fea-
ture extraction of STDP can address these shortcomings. However, if these grand challenges are
met, then STDP would enable a new generation of low-cost (area/power/resource requirement),
local, and unsupervised learning framework, as opposed to their non-spiking counterparts (ANNs)
relying on global and supervised learning techniques. The advantages are manifold: unsupervised learning enables the network to adapt to changing environments, while local learning assists in the implementation of low-power learning circuit primitives. Local learning would remove the need for the global error distribution that is essential for supervised learning and that, in turn, requires expensive hardware circuitry.
Stochastic STDP: The STDP algorithm, described earlier, works well for shallow networks (2–3 layers) with full precision weights. However, networks with low precision, for example binary weights, require a probabilistic learning rule for efficient training and to avoid rapid switching of weights between allowed levels. The difference between the spike times of post- and pre-neuron (t_post − t_pre) is mapped to the switching probability of the connecting binary weight (Figure 4) [222]. The synaptic weight switches from low to high state (Hebbian potentiation) with a constant probability if the time difference is positive and below a certain threshold. If the time difference is negative and above a fixed threshold, then the weight switches from high to low state (Hebbian depression) with a constant probability. Additionally, if the time difference is positive and above a certain value, then the synapse is depressed (anti-Hebbian depression) due to low causality. The authors in Reference [222] mention that anti-Hebbian learning enables the synapses to unlearn features lying outside the receptive field, like noisy background in images. The stochastic STDP learning rule enables training of SNNs with binary weights [188] and can achieve similar accuracy as full-precision SNNs trained with STDP [52], but with lower memory requirements [222]. Additionally, Reference [119] proposes a semi-supervised training mechanism with STDP. The network is initialized with pre-trained STDP weights trained in an unsupervised manner. Next, supervised gradient-based learning is employed to fine-tune the weights and improve the accuracy with faster convergence. Even though STDP-based local learning rules are not yet applicable for large-scale problems (like ImageNet), they perform relatively well for energy-efficient clustering tasks.

Fig. 4. Stochastic STDP learning rule for binary synapse, where the probability of the synapse switching from high to low state (H→L) is a function of the difference between the spike times of post- and pre-neuron (t_post − t_pre).
2.3.2 Supervised Learning. Unlike unsupervised learning, which does not require labelled examples for training, supervised learning derives its strength from a large corpus of labelled examples. In supervised learning, the network receives an input (image, text, or audio) and produces a prediction score for all possible labels that the input can belong to, i.e., the number of classes in the dataset. The prediction is compared with the true label (one-hot vector with “1” for the correct
class and “0” for everything else) to determine the error/loss, and the network parameters (weights,
bias, etc.) are updated based on the gradients of the loss function with respect to the parameters.
Generally, in ANNs the supervised methods perform extremely well if a large number of labelled
samples are available. However, direct implementation of gradient-based methods in SNNs is
challenging because of the discontinuous and non-differentiable nature of the spiking neuron.
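One common way to work around the non-differentiable spike, used by many of the spike-based backpropagation methods cited in this survey, is a surrogate gradient: the forward pass keeps the hard threshold while the backward pass substitutes a smooth derivative. The piecewise-linear (triangular) surrogate below is an illustrative choice, not the specific function of any single cited work.

import torch

class SpikeFn(torch.autograd.Function):
    """Heaviside spike in the forward pass, surrogate derivative in the backward pass."""
    @staticmethod
    def forward(ctx, u, v_th=1.0):
        ctx.save_for_backward(u)
        ctx.v_th = v_th
        return (u >= v_th).float()

    @staticmethod
    def backward(ctx, grad_out):
        (u,) = ctx.saved_tensors
        # triangular surrogate: gradient is largest near the threshold, zero far from it
        surrogate = torch.clamp(1.0 - torch.abs(u - ctx.v_th), min=0.0)
        return grad_out * surrogate, None

u = torch.randn(5, requires_grad=True)
SpikeFn.apply(u).sum().backward()
print(u.grad)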
Algorithm fragment (tail of a spike-based backpropagation training loop with trainable threshold and leak):
  end
  Loss = CrossEntropy(Y, U_L^T)
  W_a ← W_a − ε dLoss/dW_a   // Weight update
  V ← V − ε dLoss/dV         // Threshold update
  λ ← λ − ε dLoss/dλ         // Leak update
end
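The fragment above corresponds to the tail of a spike-based backpropagation loop in which the loss is taken on the accumulated output-layer membrane potential and the weights, firing thresholds, and leak constants are all updated by gradient descent. A hedged PyTorch-style sketch of that outer loop is shown below; `model`, its threshold and leak parameters, and the data loader are placeholders we assume exist rather than components defined in the original algorithm.

import torch
import torch.nn.functional as F

def train_epoch(model, loader, lr=1e-3):
    """One epoch of spike-based backprop with trainable weights, thresholds, and leaks."""
    # weights W_a, per-layer thresholds V, and leaks lambda are all learnable tensors in model
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for x, y in loader:
        u_out = model(x)                  # accumulated output-layer membrane potential U_L^T
        loss = F.cross_entropy(u_out, y)  # Loss = CrossEntropy(Y, U_L^T)
        opt.zero_grad()
        loss.backward()                   # gradients flow through surrogate spike functions
        opt.step()                        # W_a, V, and lambda updated together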
Feedforward networks compute their output solely from the current input; in contrast, recurrent networks have feedback connections where the output of the neuron is routed back as input with a time delay. Hence, the output is a function of the current input and the previous state of the neuron. SNNs implicitly have this relation, as the membrane potential depends on the input and the potential at the previous timestep (Equation (2)). Thus, SNNs have implicit recurrence through their internal state [166, 252], as shown in Figure 9. Note, however, that in this section we discuss recurrent networks of spiking neurons (RSNNs) that have explicit recurrent connections on top of the intrinsic recurrent dynamics. In ANNs, recurrent connections are particularly important to learn and generate sequences with long-range structure such as text, video, or audio data. RNNs contain feedback loops in their connectivity structure, which allows the network to remember prior computations or the history of the input [57, 116, 179, 227]. The computation in RNNs occurs by unrolling the network over discrete timesteps. Once unrolled, RNNs can be viewed as deep feedforward networks with shared weights capable of establishing long-range temporal
dependencies. However, training RNNs has proven to be much more difficult than training their deep feedforward counterparts due to exploding or vanishing gradients that deteriorate the overall learning [20, 81]. Today, Recurrent Neural Networks (RNNs), specifically Long Short-term Memory (LSTM) networks [82], are widely used to process sequential inputs such as language or speech.

Fig. 8. Hybrid learning algorithm combining ANN-to-SNN conversion and spike-based backpropagation [194]. A trained ANN is converted to SNN by replacing ReLU neurons with IF neurons and threshold balancing, where the threshold for each layer is determined sequentially. The converted SNN is incrementally trained for a few epochs (∼20) with spike-based gradient backpropagation to reduce the inference latency. The trained SNN demonstrates better accuracy as well as lower inference latency (100–200 timesteps) on image classification tasks. The inference latency can be reduced further (5–10 timesteps) by replacing the Poisson spike generator with an encoding layer that accepts analog values and outputs spike-trains [193].
The biological brain is known to be composed of a network of several millions of neurons con-
nected via billions of recurrent synaptic links. These neuronal links are not randomly determined
but are optimized for specific tasks and have developed through long-term evolution. RSNNs at-
tempt to realize this neuronal model of the brain. This includes exploring connectivity in the model
as well as learning the synaptic weights. RSNNs aim to offer high energy-efficiency compared to
RNNs due to the computations being sparse and discrete in nature in the form of spikes. However,
the research in this field has been highly limited due to the challenges associated with training.
RSNNs generally have sub-optimal performance compared to equivalently trained RNNs, thereby
limiting their application to only simplistic tasks.
Research on developing algorithms to enable bio-plausible learning in RSNNs has been an open problem for quite a few years with little success. There have been several works over the past years that aim to train RSNN models in a bio-plausible manner [7, 71, 148, 235]. These aim at learning the non-linear dynamics of RSNNs through feedback-based local learning rules.
Fig. 9. Implicit recurrence in SNNs is similar to the recurrence relation in traditional RNNs. In RNNs, the
hidden state h(t ) retains the history of all previous inputs through explicit feedback connections, whereas,
in SNNs, the equivalent membrane potential V (t ) acts as a memory of past inputs. The output in RNNs is
a non-linear function of the hidden state, and in SNNs, the neurons fire based on their membrane potential.
Note, however, the inputs and outputs in SNNs are binary, whereas, in RNNs, they are continuous values.
Fig. 10. Algorithmic and architectural optimization approaches for improving learning in liquid state ma-
chines (LSMs). (a) Vanilla LSM taking spike stream corresponding to each pixel in the image frame as input
and predicting outputs at the readout layer trained using backpropagation. (b) Learning alternate liquid
connections [175] to improve performance. Fast connections (having a high decay rate) are fixed, while the
slow connections (having a low decay rate) are learned using STDP-based local learning rules. (c) Ensem-
ble of smaller liquids with local classifiers followed by a global classifier for predicting final output leading
to better performance at smaller overall liquid size [220]. (d) Deep Hierarchical LSM with attention modu-
lated readout. Each hidden reservoir captures different sets of features that are selectively analyzed at the
attention layer [219].
Spiking variants of the LSTM, where the compute-intensive parts of the LSTM are converted to SNN, have also been explored for better energy-efficiency on edge devices. The results showcase significantly lower energy consumption with a negligible drop in performance for sequential image classification on the MNIST [117] dataset and sequence-to-sequence translation on the IWSLT14 [36] dataset.
2.4.2 Liquid State Machines. Liquid state machines (LSM) have been explored as lightweight
architectures to handle spatio-temporal inputs in a bio-plausible manner [134, 136]. LSM is an
RSNN consisting of a sparsely connected reservoir (liquid) of excitatory and inhibitory spiking neu-
rons. The synaptic connections and their weights are randomly initialized and fixed a priori. This
leads to a simplistic lightweight structure while still inherently capturing the spatio-temporal input
information. The liquid essentially projects the inputs to a higher dimensional space while also re-
taining temporal information through recurrent connections. Given a large enough reservoir with
random and sparse interconnects, the high-dimensional representation generated can be linearly
classified using a fully connected readout layer. The work by Maass and Markram in 2004 [135]
investigates the computational power of LSMs for real-time computing. Figure 10(a) shows a
vanilla LSM consisting of a reservoir of spiking neurons (excitatory and inhibitory) with random
connections. Several works have successfully employed LSMs for a variety of applications such as gesture recognition [175], video activity recognition [219], reinforcement learning [124, 183],
and so on, with low compute costs. The major challenge with LSM lies in its inability to scale well
for real-life complex computing tasks without drastically increasing the reservoir size. Various ef-
forts have been made towards improving the learning capability of LSMs without increasing their
size significantly. One approach involves exploring mechanisms for training the liquid connections
to improve application accuracy at the cost of added complexity. Authors in Reference [258] explore using heterogeneous neurons with different behaviors and degrees of excitability in the liquid to aid learning. Reference [175] employs a Driven/Autonomous model approach [2] coupled with the Recursive Least Squares (RLS) rule and FORCE training [167] to train the liquid connections. This approach consists of two different types of synaptic connections, namely, fixed fast connections (τ_fast) and learnable slow connections (τ_slow = 10 × τ_fast). This is shown in Figure 10(b).
Other efforts focus on optimizing the network architecture itself. Authors in Reference [220] pro-
pose to adopt a “divide and learn” strategy by utilizing multiple small liquids, each learning charac-
teristic patterns corresponding to a segment of input patterns. In addition, the input-to-ensemble
connections are trained using STDP-based learning rules. The intermediate outputs from the en-
semble of liquids are combined using a global classifier to generate the final output. This approach
is shown in Figure 10(c). In contrast, authors in Reference [219] propose an architecture involving
multiple layers of liquids to form a deep hierarchical LSM. The hidden layers (liquids) are con-
nected using spiking winner-take-all encoders that extract and propagate the temporal features.
The representations generated by the different hidden layers (liquids) are condensed using an at-
tention function before final classification at the readout. Figure 10(d) demonstrates this method.
These approaches offer competitive accuracy and faster training time compared to a large single
liquid. Similar to these, Reference [162] presents an efficient partitioning method for hierarchical
mapping of large SNNs on reconfigurable neuromorphic hardware.
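A minimal sketch of a vanilla LSM as in Figure 10(a): a fixed, sparsely and randomly connected reservoir of LIF neurons projects the input spike stream into a high-dimensional state, and only a linear readout on the accumulated reservoir activity would be trained. Reservoir size, sparsity, and scaling constants are arbitrary assumptions.

import numpy as np

rng = np.random.default_rng(0)
n_in, n_res, timesteps = 20, 200, 100

# fixed random input and sparse recurrent reservoir weights; never trained
w_in = rng.standard_normal((n_res, n_in)) * 0.3
w_res = rng.standard_normal((n_res, n_res)) * 0.1
w_res *= (rng.random((n_res, n_res)) < 0.1)          # ~10% sparse recurrent connectivity

def liquid_state(spikes_in, v_th=1.0, leak=0.9):
    """Run the reservoir and return per-neuron spike counts (the 'liquid state')."""
    u = np.zeros(n_res)
    s = np.zeros(n_res)
    counts = np.zeros(n_res)
    for t in range(timesteps):
        u = leak * u + w_in @ spikes_in[t] + w_res @ s   # leak, feedforward, recurrent drive
        s = (u > v_th).astype(float)
        u = u - s * v_th                                  # reset by subtraction
        counts += s
    return counts

x = (rng.random((timesteps, n_in)) < 0.2).astype(float)
state = liquid_state(x)           # only a linear readout on `state` would be trained
print(state[:10])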
Recurrent networks of spiking neurons in the form of spiking LSTMs and LSMs thus show promise for realizing energy-efficient implementations of real-world applications on resource-constrained edge-devices. However, advancement in this field has not kept pace with that of ANNs due to their limited learning ability.
3 APPLICATIONS
SNNs are well suited to process both static as well as sequential data due to their inherent recur-
rence. Also, SNNs can naturally process discrete spatiotemporal data from event sensors. In this
Table 1. Performance of SNNs on Different Image-recognition Datasets
Paper Neuron model Input coding Learning rule Network architecture Accuracy Timesteps
MNIST
[52] LIF Rate STDP 2 FC 95% 700
[156] IF Temporal Backprop 784FC-600FC-10FC 96.98% 167
[222] LIF Rate Stochastic STDP 36C3-2P-128FC-10FC 98.54% 100
[268] LIF Encoding layer Backprop 15C5-P2-40C5-P2-300FC 99.53% 5
[267] IF Temporal ANN-to-SNN 32C5-P2-64C5-P2-1024FC-10FC 99.41% 8
[198] IF Temporal ANN-to-SNN LeNet-5 98.53% -
CIFAR10
[76] IF Temporal ANN-to-SNN VGG16 93.63% 2,048
[209] IF Rate ANN-to-SNN VGG16 91.55% 1,000
[199] IF Rate ANN-to-SNN 4 Conv, 2 FC 90.85% 400
[194] LIF Rate Hybrid VGG16 92.02% 200
[120] LIF Rate Backprop VGG9 90.45% 100
[222] LIF Rate Stochastic STDP 256C3-2P-1024FC-10FC 66.23% 100
[255] LIF Encoding layer Backprop CIFARNet 90.53% 12
[268] LIF Encoding layer Backprop CIFARNet 91.41% 5
[193] LIF Encoding layer Hybrid VGG16 92.70% 5
ImageNet
[76] IF Temporal ANN-to-SNN VGG16 73.46% 2,560
[77] IF Rate ANN-to-SNN VGG16 71.34% 768
[199] IF Rate ANN-to-SNN VGG16 49.61% 400
[209] IF Rate ANN-to-SNN VGG16 69.96% 300
[194] LIF Rate Hybrid VGG16 65.19% 250
[132] IF Encoding layer ANN-to-SNN VGG15 66.56% 64
[193] LIF Encoding layer Hybrid VGG16 69.00% 5
NMNIST
[123] LIF Event sensor Backprop 2 FC 98.74% 350 ms
[254] LIF Event sensor Backprop 3 FC 98.78% 300 ms
[213] SRM Event sensor Backprop 12C5-2P-64C5-2P-10FC 99.20% 300 ms
[96] SRM Event sensor Backprop 2 FC 98.88% 0.6 ms
DVS CIFAR10
[108] IF Event sensor ANN-to-SNN 4 Conv, 2 FC 65.61% 60
[255] LIF Event sensor Backprop 128C3-2P-128C3-256C3-2P-1024FC 60.5% 5
[253] IF Event sensor Backprop VGG7 62.5% 5
section, we discuss the application of SNNs in image classification, gesture recognition, sentiment
analysis, biomedical applications, and motion estimation. Additionally, we review the relevance of
SNNs as a defense mechanism against adversarial attacks.
Fig. 11. (a) Input Event Representation. (Top) Continuous raw events and discrete grayscale images from
a DAVIS camera. (Bottom) Accumulated event frames between two consecutive grayscale images to form
the former and latter event groups. (b) Spike-FlowNet Architecture [118]. The 4-channeled input images,
as groups of former and latter events, are sequentially passed through the hybrid network. The SNN-block
contains the encoder layers of the network, while the ANN-block contains the residual and decoder layers.
The loss is evaluated after forward-propagating all consecutive input event frames within the time window.
(c) The predicted optical flow compared with the provided ground truth and EV-FlowNet [270].
Table 2. Average Endpoint Error (AEE) Comparisons between EV-FlowNet [270], Spike-FlowNet [118], and
Fisher-Rao Metric-based Method [142] on the MVSEC Dataset
dt=1 frame dt=4 frame
indoor1 indoor2 indoor3 outdoor1 indoor1 indoor2 indoor3 outdoor1
EV-FlowNet [270] 1.03 1.72 1.53 0.49 2.25 4.05 3.45 1.23
Spike-FlowNet [118] 0.84 1.28 1.11 0.49 2.24 3.83 3.18 1.09
Fisher-Rao Metric [142] 1.88 - - - 2.95 - - -
Lower is better.
Adversarial attacks apply small, carefully crafted perturbations to the input that cause the neural network to misclassify the input sample. The modified samples are subtle and
undetectable by a human observer. Such systems can seriously undermine the security of neural
networks for mission-critical applications. For example, a slightly modified image of the “Stop sign”
is classified as a speed limit sign [60]. The defense mechanisms include activation pruning, input
discretization, and non-linear transfer functions. SNNs inherently possess most of these features
and are better suited to handle adversarial attacks [14, 212]. The input to SNNs is binary, the dy-
namics of the LIF neuron are non-linear, and the activations are sparse. Also, SNNs can exploit the
time and leak parameter to improve the resiliency of the network. The authors in Reference [212]
showed that SNN trained with spike-based backpropagation, employing LIF neurons, and lower
number of timesteps perform better under adversarial attack compared to ANN as well as ANN-to-
SNN-converted networks that generally use IF neuron and require a larger number of timesteps.
The authors in Reference [14] studied the robustness of SNNs under different input coding meth-
ods with random and adversarial perturbations. Networks trained with rate coding performed
better compared to temporally coded networks. The reason may be the particular form of temporal
coding, first-to-spike, which allows only one spike per neuron, thereby reducing redundancy. To
generate an adversarial sample, the gradient of the loss with respect to input is added to the input.
In SNNs, the gradient is continuous, whereas the input is binary. The authors in Reference [128]
proposed a method to convert continuous gradient to spike-compatible ternary gradients with
probabilistic sampling. Adversarial robustness is an active area of research, and SNNs with their
unique properties can potentially provide a low-power defense mechanism.
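The gradient-based attack mentioned above is commonly formulated as in the fast gradient sign method, sketched below for an analog input fed to a (placeholder) differentiable model; applying such perturbations to binary spike inputs requires the spike-compatible gradient conversion of Reference [128], which is not reproduced here.

import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, eps=0.03):
    """Perturb input x in the direction that increases the classification loss."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # adversarial sample: add the sign of the input gradient, keep pixels in [0, 1]
    return torch.clamp(x + eps * x.grad.sign(), 0.0, 1.0).detach()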
4 NEUROMORPHIC HARDWARE
We have described how neuromorphic computing presents a novel paradigm with diverse neuronal
functionalities as well as synaptic learning algorithms. In this section, we delve into the design of
neuromorphic hardware [41, 231] that can faithfully emulate the algorithmic functionalities and
leverage the inherent computational efficiency offered by SNNs.
Standard ANN workloads exhibit abundant data-parallelism in their processing, which of course can be leveraged through GPU or TPU acceleration. However, SNN
workloads tend to show event-driven characteristics, i.e., the neurons process data asynchronously,
leading to considerable sparsity in its activations at a given time. Moreover, spiking activity of the
neurons can diminish in deeper layers in a deep SNN. Hardware systems such as GPUs and TPUs
are designed to efficiently leverage data-parallelism, but they fail to exploit the high temporal spar-
sity as well as spatial sparsity of activations in SNNs. Additionally, SNNs have memory-intensive
data-structures such as membrane potential that need to be processed across timesteps, an over-
head that is not entirely mitigated by today’s digital accelerators. There have been some proposals
on using analog capacitors as memory elements in neurons [89] for “state-ful” processing, which
could serve as a good design choice for mixed-signal SNN accelerators. Finally, the training algo-
rithms of SNNs could involve both global as well as local weight updates across time, as described
in Section 2, which can introduce further bottlenecks in efficient execution of such operations.
Considering the limitations of hardware systems designed to accelerate machine learning work-
loads, researchers have explored several avenues [6, 11, 21, 31, 46, 89, 208, 231, 262] across the
design stack from devices and circuits to architectural solutions to address the unique challenges
presented by neuromorphic computing workloads such as SNNs. In the next subsection, we will
discuss how intelligent design of basic compute primitives forms the platform for acceleration of
SNN workloads.
Fig. 12. (a) Mesh architecture adopted in Loihi for transmitting spikes across various neurons [46], (b) Multi-
core implementation of neuromorphic processors connected to a common router [171] and (c) Near-memory
processing architecture with a 2-D array of processing elements, with each PE consisting of internal storage
for neuronal functionalities as well as computing elements for synaptic computations [5].
Within each core, digital circuits perform the synaptic accumulation and membrane potential update. The locally synchronous digital circuit design facilitates a low-complexity implementation of complex neuron functionality.
Some extensions of mesh-based architectures involved hierarchical mixed-mode routing sys-
tems where various distinct levels of routers are deployed. For example, in Reference [154], three levels of routers are deployed: one responsible for local traffic, a second responsible for non-local events for nearby cores, and a third responsible for long-distance communication. This chip, also known as DYNAPs, demonstrates the efficacy of the novel digital communication scheme along with emulation of neuro-synaptic functionalities on CMOS analog circuits.
Event-driven systems can also be incorporated using 2-D processing arrays connected to a com-
mon packet router, as adopted by the SpiNNaker Project [171], shown in Figure 12(b). Events are
communicated by maintaining routing tables for event sources. The processing elements simply
emit a spiking event in the form of a packet with the address of the spiking neuron, and the router
communicates the packets to any of the other processing cores where the selected routes are de-
termined by the routing table.
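The address-event idea can be summarized in a few lines: a firing neuron emits only its address, and a routing table maps that address to the destination cores that need the spike. The data structures below are a simplified illustration, not the packet format or routing scheme of any particular chip.

from collections import namedtuple

# an AER packet carries only the source neuron address (and optionally a timestamp)
AERPacket = namedtuple("AERPacket", ["src_neuron", "timestamp"])

# routing table: source neuron address -> list of (core, local synapse index) targets
routing_table = {
    0: [("core_1", 5), ("core_2", 17)],
    1: [("core_1", 9)],
}

def route(packet):
    """Deliver a spike packet to every destination listed for its source address."""
    for core, synapse in routing_table.get(packet.src_neuron, []):
        print(f"deliver spike from neuron {packet.src_neuron} to {core}, synapse {synapse}")

route(AERPacket(src_neuron=0, timestamp=42))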
In addition to the aforementioned approaches to building large-scale neuromorphic systems, re-
searchers have explored scalable implementations that faithfully realize complex neuronal and
synaptic functions using analog circuits. For example, the BrainScaleS system [203] performs
wafer-scale integration of multiple instances of the HICANN ASIC, which implements the Adap-
tive Exponential Integrate-and-Fire Model using analog circuit along with synapses capable of
performing STDP-based learning. Neurogrid [21] is a mixed-signal large-scale system to perform
brain simulations and visualization. It used a combination of highly bio-plausible neural and synap-
tic sub-threshold analog circuits with detailed modeling of soma, dendritic trees, axons, and so
on. Multiple cores communicate with each other using digital Address Event Representation
(AER) protocol. The basic skeleton of neuromorphic systems with AER architecture containing
an Integrate-and-Fire Array Transceiver (IFAT), a look-up table (LUT) to store synaptic
Fig. 13. (a) RESPARC as a pool of NeuroCells (b) Macro Processing Engine - The mPE receives input spikes
over the bus and the switch network, which is processed by the crossbars to produce output currents: C1, C2,
C3, C4. The crossbar currents get integrated into the neurons to produce output spikes that are then sent to
the target neurons over the network.
connections, and AER protocol for communication between neurons is also used as a popular
approach. Variants of IFAT systems have been explored using different kinds of neuronal cir-
cuits [239, 240]. A reconfigurable mixed-signal design called ROLLS, targeting both emulation of complex neuronal and synaptic learning functionalities and image classification tasks, has been proposed [187]. Although this design derives motivation from various previous designs, it achieves a holistic integration and performs complex tasks fully on-chip.
Large-scale reconfigurable systems [244] have been explored to overcome Liebig's law for neuromorphic hardware, where the performance can be limited by the component in shortest supply. Such reconfigurable systems consist of arrays of identical components, each of which can be configured as an LIF neuron, a learning synapse with STDP-based rules, or an axon with trainable delay. The efficacy of such a system has been prototyped using a field programmable gate array (FPGA)-based design and later demonstrated in silicon.
4.2.2 Efficient Computing Architectures. Beyond router-based designs, researchers have also explored tiled architectures to reduce data movement over the network. SPARE [5] and RESPARC [11] are examples of tiled architectures built with CMOS and post-CMOS primitives, respectively, that achieve significant improvements in efficiency for SNN execution. As shown in Figure 12(c), each tile in SPARE is
composed of ROM-Embedded RAM-based arrays, where RAM stores the weights for a small par-
tition of the SNN, and ROM stores lookup tables for executing transcendental operations that can
perform complex neuron functions. During an SNN execution, each tile performs multiply-and-
accumulate and transcendental operations on its portion of weight data (weights remain station-
ary), thereby enabling near-memory computing. Typical SNN execution on SPARE is performed
in a time-multiplexed manner, where each layer is mapped on the tiled architecture, one at a time.
The currently mapped layer computes its outputs before the next layer is mapped.
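A functional sketch of this time-multiplexed, weight-stationary execution model is given below. It assumes, purely for illustration, a sigmoid-like neuron function served from a ROM-style look-up table; the tile count, layer sizes, and function names are hypothetical and not taken from the SPARE design.

```python
import numpy as np

def lut_neuron(x, lut, lo=-8.0, hi=8.0):
    """Approximate a transcendental neuron function with a precomputed look-up table."""
    idx = np.clip(((x - lo) / (hi - lo) * (len(lut) - 1)).astype(int), 0, len(lut) - 1)
    return lut[idx]

def run_layer_on_tiles(inputs, weights, n_tiles, lut):
    """Weights stay stationary; each tile handles one partition of the output neurons."""
    partitions = np.array_split(np.arange(weights.shape[1]), n_tiles)
    out = np.empty(weights.shape[1])
    for cols in partitions:                  # each partition stands in for one tile
        macs = inputs @ weights[:, cols]     # multiply-and-accumulate near the memory
        out[cols] = lut_neuron(macs, lut)    # transcendental operation via the ROM LUT
    return out

rng = np.random.default_rng(0)
lut = 1.0 / (1.0 + np.exp(-np.linspace(-8, 8, 256)))   # precomputed sigmoid table
layers = [rng.normal(size=(64, 32)), rng.normal(size=(32, 10))]
x = (rng.random(64) < 0.2).astype(float)               # binary input spikes
for w in layers:                                       # layers are mapped one at a time
    x = run_layer_on_tiles(x, w, n_tiles=4, lut=lut)
print(x.round(3))
```

In hardware the tiles operate in parallel; the sequential loop over partitions here only captures the partitioning and the layer-at-a-time mapping.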
RESPARC further extends the tiled architecture by leveraging the high storage density of memristive crossbars to enable a large number of on-chip tiles. However, expensive crossbar writes limit
the applicability of a time-multiplexed architecture, where the crossbars are reused across layers by
re-programming weight matrices and executing the corresponding dot-product operations. Con-
sequently, a spatial architecture where the weight data of an entire SNN are pinned to crossbars
located across multiple tiles is more efficient, as it leverages the benefits of high storage density
while alleviating the costly writes. Figure 13 illustrates a tiled architecture built with post-CMOS
primitives (memristive crossbars). It is worth noting that while such a spatial architecture is ef-
fective for SNN inference, it may still suffer from costly writes in SNN training where weight
data is updated frequently. Recent explorations on hybrid ANN-SNN architectures by partitioning
the network into non-spiking and spiking counterparts have also shown promise to mitigate the
algorithmic inference latency disadvantage of SNNs [216].
Alternatively, an SNN accelerator such as SpinalFlow [163] has explored techniques to tackle the iterative memory-access overhead incurred across multiple timesteps for membrane-potential updates and weight reads. It proposes a novel dataflow that leverages the temporal re-use patterns in SNN workloads. By creating a compact, time-sorted list of spikes, SpinalFlow sequentially walks through the spikes of all timesteps in a layer, yielding significant speedups due to activation sparsity. The architecture uses an output-stationary dataflow to map neurons to the processing elements across all timesteps. This allows the membrane potential of a neuron to accumulate across all the timesteps in a layer before proceeding to the next layer, thereby eliminating additional storage and data-movement costs for the membrane-potential data structure. Such a dataflow maximizes re-use of the membrane potential, in contrast to dataflows followed in ANN accelerators, which aim to maximize re-use of data structures such as inputs, weights, and outputs.
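The gist of this output-stationary, spike-list-driven execution can be captured in a few lines of code. The sketch below is a behavioral approximation of an IF layer, not the SpinalFlow microarchitecture; thresholds, sizes, and names are illustrative.

```python
import numpy as np

def layer_output_stationary(spike_list, weights, v_thresh=1.0):
    """spike_list: (timestep, pre_neuron) pairs sorted by time; weights: [pre, post]."""
    n_post = weights.shape[1]
    v_mem = np.zeros(n_post)        # membrane potentials stay resident ("output stationary")
    out_spikes = []
    for t, pre in spike_list:       # single pass over the compact, time-sorted spike list
        v_mem += weights[pre]       # accumulate only for inputs that actually spiked
        fired = np.where(v_mem >= v_thresh)[0]
        out_spikes.extend((t, int(post)) for post in fired)
        v_mem[fired] = 0.0          # reset after firing
    return sorted(out_spikes)       # the next layer consumes another time-sorted list

rng = np.random.default_rng(1)
w1 = rng.uniform(0.0, 0.6, size=(16, 8))
in_spikes = sorted((t, n) for t in range(8) for n in range(16) if rng.random() < 0.1)
print(layer_output_stationary(in_spikes, w1))
```

For IF neurons without leak a single pass suffices; a leaky variant would additionally apply a decay proportional to the time elapsed between consecutive entries in the list.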
Fig. 14. Illustration of CMOS-based neuronal circuits. (a) Circuit shows the basic comparison and firing behavior of an IF neuron [87]. (b) Circuit shows more complex functionalities of an IF neuron, such as spike-frequency adaptation, leak, as well as refractory period [236].
CMOS technology has been a popular thrust of research in neuromorphic circuit design. In the following subsections, we describe neuronal and synaptic circuit primitives built in CMOS and then delineate how novel memory technologies can be effectively used to accelerate the aforementioned computations and realize the basic processing units in a compact fashion.
4.4.1 Neuronal Circuits. The classical approach to representing neuronal circuits on silicon was based on the equivalence between ion transport in biological neurons and electron transport in transistors. Researchers have shown that the ionic channel transport in biological neurons can be modeled using very few transistors operating in the sub-threshold domain [61]. Such sub-threshold transistors have also been leveraged to implement Hodgkin-Huxley-based neuron models [263] using programmable kinetics of the gating variables.
However, the more widely popular neuronal functionality is IF or LIF, which we have described
in detail in Section 2. The abstract view of such a neuron primarily consists of a capacitive unit
that holds and updates the membrane potential, a comparison unit, and a thresholding unit. The
most primitive form of IF neuron was conceptualized back in the late ’80s [143] with a simple
feedback circuit that could produce fixed width, fixed height voltage pulses, where the rate of the
pulses was proportional to the input injection current (arriving from synapses), and the temporal
characteristics of the pulses represented the shape of the input current waveform. A subsequent
design of IF neuron circuits in the analog domain involved incorporation of a comparator unit,
as shown in Figure 14(a) [145, 236]. In this design, the injected current is integrated on the membrane capacitance, Cmem, and the resulting Vmem is fed to a comparator circuit that compares it against the threshold voltage Vthr. The capacitive feedback, Cfb, in both circuits ensures that small fluctuations of Vmem around Vthr do not affect the firing activity. The updated circuit also has a control for the refractory period of the neuron. Initially, after firing, Vmem decreases linearly, but once it falls below Vthr, the output of the first inverter goes high, leading to discharge of the capacitor Cr at a rate controlled by Vrfr. This ensures that Vmem does not start to increase as long as the voltage at Cr is above a certain value.
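A behavioral, discrete-time model of this comparator-based neuron is sketched below; the Euler time step, capacitance, threshold, and refractory length are illustrative values rather than parameters of the cited circuits.

```python
def if_neuron(i_in, c_mem=1e-12, v_thr=0.5, t_refr=5, dt=1e-3):
    """Behavioral integrate-and-fire: i_in is a sequence of injected currents per timestep."""
    v_mem, spikes, refr = 0.0, [], 0
    for t, i in enumerate(i_in):
        if refr > 0:                # refractory window: integration is suppressed,
            refr -= 1               # mimicking the role of the Cr/Vrfr branch
            continue
        v_mem += (i / c_mem) * dt   # the membrane capacitor integrates the injected current
        if v_mem >= v_thr:          # comparator output flips when Vmem crosses Vthr
            spikes.append(t)
            v_mem = 0.0             # reset
            refr = t_refr
    return spikes

print(if_neuron([2e-10] * 50))      # regular spiking under a constant injection current
```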
Analog IF neuron circuits have evolved significantly over time by incorporating more features,
such as reset, leak, as well as spike frequency adaptation, and so on. Figure 14(b) shows a fairly
complex and generalized IF neuron circuitry [89]. Such a neuron circuit consists of an input dif-
ferential pair integrator (DPI). The integration is performed by the membrane capacitor, Cmem ,
and the spike generation scheme is implemented using an inverting amplifier with positive feed-
back. The reset behavior is implemented using transistor M13, and together with transistor M21,
refractory-period behavior is implemented.
Fig. 15. Illustration of a fully digital IF neuron.
Fig. 16. Illustration of three different kinds of CMOS-based synaptic circuits.
Transistors M5–M10 produce a current proportional to the neuron's firing rate and hence implement the spike-frequency-adaptation mechanism. The modified
equation of the implemented LIF neuron is:
$C_{mem} \frac{dV_{mem}}{dt} = (I_{dpi} - I_{\tau}) - I_{ahp} + I_{fb}.$    (10)
This generalized neuron realizes an adaptive, exponential IF neuron.
Other types of analog neuron implementations based on CMOS have also been explored, such as
the log-domain LPF neuron [12], which implements a reconfigurable IF circuit. Yet another type of
IF neuron circuit is called the “Tau-Cell neuron” [191, 237], which uses current-mode circuits, and
the membrane potential is represented as a current. More compact IF neuron circuits have been
proposed using above-threshold transistors, such as the quadratic IF neuron [249]. Such a neu-
ron is loosely based on the Izhikevich neuron model [92] where two state variables are maintained
across two separate capacitors instead of one. Leaky IF neurons have also been implemented using switched-capacitor circuits, where the switches are used to implement the leak behavior between the membrane potential and the resting potential [64]. The switched-capacitor technique motivates more digitally inspired neuron designs. One such design involves weighted current-mirror circuits activated by a binary-coded digital weight [211]; the neuron then generates a positive or negative spike based on the excitatory/inhibitory nature of the synapses.
Following the digital inspiration of the previous neuron design, IF neurons have also been explored in a fully digital form [33]. A digital adder and accumulator, along with comparator circuits, can be used to implement the integration and spike-generation behavior of IF neurons, as shown in Figure 15. Leak in such a neuron is implemented by a fixed-weight synapse driven by a global clock.
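A minimal sketch of such a digital IF neuron is shown below, with the leak realized as a fixed-weight synapse subtracted on every global clock tick; the accumulator width, leak weight, and threshold are illustrative.

```python
def digital_if_step(acc, weighted_inputs, leak_weight=1, threshold=64, bits=8):
    """One global clock tick of a digital IF neuron with an n-bit accumulator."""
    acc += sum(weighted_inputs) - leak_weight   # adder: synaptic inputs plus the fixed leak
    acc = max(0, min(acc, (1 << bits) - 1))     # saturate to the accumulator width
    spike = int(acc >= threshold)               # comparator against the threshold register
    if spike:
        acc = 0                                 # reset on firing
    return acc, spike

acc = 0
for tick in range(20):
    acc, spike = digital_if_step(acc, weighted_inputs=[5, 3])
    if spike:
        print("spike at tick", tick)
```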
4.4.2 CMOS-based Synaptic Circuits. Synaptic circuits were first conceived by Carver
Mead [145] as pulsed current-sources with transistors operating in the subthreshold domain, as
shown in Figure 16. The output of the circuit is simply a synaptic current, Isyn, which is a pulse whose width is proportional to the width of the input spike. An extension of the aforementioned scheme involves exponential decay of the post-synaptic response using the mechanism of charging and discharging of the node Vsyn [12, 115]. When an input pulse is applied, the node Vsyn decreases linearly, with the rate of decrease governed by the difference in the currents through the transistors biased by Vtau and Vw, respectively. Consequently, the synaptic current Isyn increases exponentially. When there is no input spike, Isyn decays exponentially at a rate governed by the current through the transistor biased by Vtau. An alternative synaptic circuit can be imple-
mented using a differential pair, more commonly known as a differential pair integrator (DPI)
circuit [89]. During the charging phase, current through one branch of the differential pair repre-
sents the input current to the synapse. This DPI synapse has more independent control of the time-
constant of charge and discharge of the synaptic voltage node.
Fig. 17. Illustration of CMOS circuits consisting of subthreshold transistors exhibiting various complex synaptic functionalities such as STDP, short-term depression (STD), as well as integration [89].
Implementation of plasticity, both short-term and long-term, in synapses requires additional circuitry. Unsupervised learning neces-
sitates automatic update of weights based on internal signals rather than providing the update
signals externally for individual synapses. Figure 17 shows an excitatory synaptic circuit [89] that
implements both short-term plasticity (STP) and long-term plasticity (STDP), while using a current-mirror integrator (CMI) circuit to provide the integrating behavior of a synapse. The CMI circuit operates similarly to the integrating circuits discussed before. On arrival of a spike at the “pre” node, the CMI's integrating capacitor gets charged, while in the absence of a spike, the charge decays through the diode-connected transistor.
The STP block in Figure 17 operates on the synaptic weight voltage Vw0. When spikes are applied to the “pre” node, the synaptic weight reduces at a rate controlled by the bias voltage. Thus, the synaptic weight goes through short-term depression, i.e., it is maximum at the onset of the spikes and reduces gradually on consecutive application of spikes at the “pre” node. The STDP mechanism, described in Section 2, is implemented using the STDP circuit shown in Figure 17. Specifically, the circuit modulates the analog voltage Vw0 based on the relative difference between the pre- and post-spike times. Two waveforms, Vpot and Vdep, are generated based on the presynaptic and postsynaptic pulses, respectively. The pre- and post-spikes activate two transistors that control the currents that increase and decrease the voltage at the Vw0 node. The bias voltages Vp and Vd set a limit on the current injected into and removed from the capacitance at the node Vw0. The transistors in the
middle branch that carry currents Ipot and Idep operate in the subthreshold region to enable an
exponential relationship required for STDP. The bistability circuit shown in Figure 17 is required
to hold the analog value of synaptic weight, since CMOS capacitors are prone to leakage of charge.
In the absence of spiking activity, the bistability circuit generates a constant leak current, which pulls the weight node Vw0 towards one of two stable states. The voltage Vthr that is fed to the comparator is set externally. If the STDP circuit causes the synaptic weight to fall below this threshold, then the bistability circuit allows a negative current to flow, driving the weight towards an analog value representing the depressed state. When the synaptic weight increases above the threshold, a positive current flows to drive the weight to a high state. A similar approach has been followed in other works with different variants of plastic synapses. For example, the ROLLS chip [187]
uses separate synaptic arrays for short-term plasticity and long-term plasticity along with custom
neuronal circuits based on a variant of the adaptive exponential IF neuron circuit described earlier.
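Behaviorally, the circuit described above realizes a pair-based STDP rule with an exponential dependence on the spike-timing difference, plus a slow drift of the stored weight toward one of two stable states. The sketch below captures that behavior in software; the amplitudes, time constant, drift rate, and stable-state values are illustrative assumptions.

```python
import math

def stdp_update(w, t_pre, t_post, a_pot=0.05, a_dep=0.05, tau=20.0):
    """Pair-based STDP: potentiate if the pre-spike precedes the post-spike, else depress."""
    dt = t_post - t_pre
    if dt >= 0:
        return w + a_pot * math.exp(-dt / tau)   # Ipot branch (subthreshold exponential)
    return w - a_dep * math.exp(dt / tau)        # Idep branch

def bistability_drift(w, w_thr=0.5, w_low=0.1, w_high=0.9, rate=0.01):
    """Constant leak current pulls the stored weight toward the nearer stable state."""
    target = w_high if w > w_thr else w_low
    return w + rate * (target - w)

w = 0.5
w = stdp_update(w, t_pre=10.0, t_post=15.0)      # causal pairing -> potentiation
for _ in range(200):                             # no activity -> drift to the high state
    w = bistability_drift(w)
print(round(w, 3))
```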
Fig. 19. Neuron devices and circuits based on various NVM technologies, such as PCM, RRAM, and Spintronics, exhibiting IF characteristics.
memory to read necessary data such as look-up tables, and so on. These look-up tables can
be used to store transcendental functions, as well as complex neuronal functions required for
SNNs [5].
Key NVM device characteristics in this context include the bit resolution available for programming, programming energy and speed, the ratio of the maximum to minimum device conductance, reliability, and endurance [112, 201, 251, 273].
4.6.1 Neuronal Circuits. The rich device physics of NVM technologies can enable efficient emu-
lation of neuronal and synaptic functionalities in single devices. Various material stacks are being
actively investigated as synaptic and neuronal elements. For instance, phase change materials
(PCM) are currently being investigated, where a chalcogenide material sandwiched between two electrodes can be switched between the amorphous and crystalline states by the heating effect induced by current flowing through the electrodes [24, 250]. The variable current through the device
in different states can be used to implement integrate and fire neurons, where the membrane po-
tential is temporally integrated by successive crystallization pulses. The device changes its state to
crystalline beyond a threshold, and it is reset to the amorphous state subsequently. The reset mech-
anism introduces stochasticity in the IF neuron, since each individual amorphous state is different.
Researchers have explored applications such as temporal correlation between data streams using
such stochastic IF neurons [233]. PCM-based neurons have been experimentally demonstrated in
both electrical [234] and photonic domains [39].
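The following sketch mimics that behavior in software: each input pulse partially crystallizes the device, raising its conductance (which stands in for the membrane potential); a spike is emitted once the conductance crosses a threshold; and the melt-quench reset lands in a slightly different amorphous state each time, which is the source of the stochastic firing. All parameter values are illustrative, not measured device data.

```python
import random

def pcm_neuron(n_pulses, g_amorphous=0.05, g_thresh=1.0, dg_per_pulse=0.12, seed=0):
    """Integrate-and-fire via progressive crystallization of a phase-change device."""
    random.seed(seed)
    g = g_amorphous
    spike_times = []
    for t in range(n_pulses):
        g += dg_per_pulse            # each crystallization pulse raises the conductance
        if g >= g_thresh:            # device switches -> output spike
            spike_times.append(t)
            # RESET: melt-quench back to an amorphous state that differs slightly each
            # time, making the effective firing threshold stochastic.
            g = g_amorphous + random.gauss(0.0, 0.02)
    return spike_times

print(pcm_neuron(100))
```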
Fig. 20. (a) STDP characteristics can be emulated in RRAM synapses using repeated pulsing schemes [©2014 IEEE. Reprinted, with permission, from Wang IT et al. In 2014 IEEE International Electron Devices Meeting, 2014 Dec. 15 (pp. 28.5.1–28.5.4). IEEE]. (b) STDP learning in PCM synapses [111] emulates neuroscientific experiments [186] (right) using a series of pulses of increasing amplitude [Reprinted (adapted) with permission from Nano Letters 2012 May 9;12(5):2179–86. Copyright (2012) American Chemical Society].
Metal oxides such as SrTiO3 [248], as well as HfOx [72] and TiOx [195], have been explored as alternative materials constituting a class of devices known as RRAMs. Like PCM, RRAM devices can also be used as LIF neurons by connecting them in parallel with an external capacitance, as shown in Figure 19. The internal membrane potential is encoded in the conductance of the RRAM device. When the RRAM is in its ON conductance state, the current through the circuit suddenly increases, which leads to an analog spike; the voltage across the RRAM device represents the LIF characteristics. Other types of RRAM-based neurons involve controlling the migration of oxygen vacancies using post-synaptic pulses [114, 146]. RRAM and PCM devices have historically been characterized by high switching times and power, although, more recently, through extensive material research, the switching times of RRAM devices have been brought down to a few-ns timescale [1]. As an
alternative technology, researchers have explored Spintronic devices such as Magnetic Tunnel Junctions (MTJs) [65] as LIF [93] as well as stochastic [205] neurons. MTJs are formed using two ferromagnetic (FM) nanomagnets with a spacer layer (MgO) sandwiched between them. An MTJ can exist in two different resistance states based on the relative direction of magnetization of the two FM layers. The spin dynamics of an FM can be expressed effectively using the Landau–Lifshitz–Gilbert (LLG) equation.
Fig. 21. STDP learning scheme in a DWM-based spin synapse [204] using peripheral transistors. [Reproduced with permission from Physical Review Applied 2016 Dec. 8;6(6):064003. Copyright 2017 American Physical Society].
Fig. 22. Illustration of a typical NVM crossbar with necessary peripheral circuits.
STDP learning in such a spin-based synapse can be implemented using a peripheral transistor MSTDP operating in the sub-threshold regime, as shown in Figure 21. A linearly increasing gate voltage is applied to MSTDP when the pre-spike arrives. When
the post-spike arrives, a programming current, exponentially dependent on the time difference of
the spikes, flows through the heavy metal layer. The transistors MA2 and MA4 in Figure 21 can
be removed when connecting multiple such synaptic devices in a crossbar fashion [204]. Due to
the low resistance range of Spintronic devices, encoding multiple states can hurt functionality. To
that effect, regular MTJs encoding binary information can also be used as synapses. A variant of
STDP learning, namely, stochastic STDP, has been explored in binary synapses, which can lead
to significant energy savings [224, 238] due to the low operating currents. To achieve multi-level
stochastic STDP, researchers have explored multiple MTJs to represent a single synapse [266].
The switching behavior of FEFETs produces a bi-stability that makes them suitable for synaptic operations. FEFET-based synapses can also achieve multiple conductance levels and have been experimentally demonstrated [43, 94, 161]. An STDP-based learning scheme has been achieved through conductance potentiation and depression using gradual threshold-voltage tuning.
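The stochastic-STDP idea with binary synapses can be summarized as follows: instead of an analog weight change, the probability of flipping the binary device state decays exponentially with the spike-timing difference. The sketch below is illustrative; the maximum switching probability and time constant are assumptions rather than device-calibrated values.

```python
import math, random

def stochastic_stdp(state, t_pre, t_post, p_max=0.8, tau=20.0, rng=random.Random(0)):
    """state is 0 (low conductance) or 1 (high); switch probabilistically based on timing."""
    dt = t_post - t_pre
    p_switch = p_max * math.exp(-abs(dt) / tau)   # exponential timing dependence
    if rng.random() < p_switch:
        return 1 if dt >= 0 else 0                # causal pairing potentiates, else depresses
    return state

state = 0
for trial in range(10):                           # repeated causal pairings
    state = stochastic_stdp(state, t_pre=0.0, t_post=4.0)
print("final binary weight:", state)
```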
4.6.3 NVM Crossbars. We have discussed briefly how NVM devices can be arranged in a matrix
formation to form a crossbar to constitute ultra-dense memories as well as serve as primitives for
highly parallel “In-Memory Computing.” More specifically, such crossbars based on NVM devices
can perform matrix-vector multiplication (MVM) operations efficiently. The conductances of the NVM devices store the values of the matrix elements, while the vector is provided as voltages on the word-lines of the crossbar, as shown in Figure 22. The multiplication occurs in the NVM device itself, and the products, which are the currents through the individual devices, $I_{ij} = V_i \cdot G_{ij}$, are summed along the bit-line.
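In software, this ideal crossbar operation reduces to a few lines (a noiseless sketch; real crossbars additionally suffer from wire resistance, device variations, and ADC quantization):

```python
import numpy as np

def crossbar_mvm(voltages, conductances):
    """Ideal crossbar: I_j = sum_i V_i * G_ij, i.e., device currents summed per bit-line."""
    device_currents = voltages[:, None] * conductances   # Ohm's law at every crosspoint
    return device_currents.sum(axis=0)                   # Kirchhoff's current law per column

rng = np.random.default_rng(2)
G = rng.uniform(1e-6, 1e-4, size=(4, 3))   # conductances in siemens (one device per crosspoint)
V = np.array([0.2, 0.0, 0.1, 0.3])         # word-line voltages (e.g., encoded input spikes)
print(crossbar_mvm(V, G))                   # bit-line currents, equivalent to V @ G
```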
In addition to performing efficient MVM synaptic computations, learning mechanisms such as
STDP have been demonstrated in NVM crossbars based on all the aforementioned technologies.
For example, in PCM technology, small-scale arrays of size 10 × 10 consisting of 1T-1R PCM cells have been used to perform on-chip STDP [58, 59]. With an additional transistor, the arrays could be further scaled [104]. PCM-based crossbars have also been explored [38] and even experimentally demonstrated [62] in the photonic domain. Crossbars based on RRAM devices bear distinct similarities to PCM crossbars. As a result, in situ learning has been proposed in RRAM crossbar arrays for single-layer neural networks [180, 245]. RRAM- and PCM-based crossbars have been further scaled for supervised learning to demonstrate deeper networks [9, 32, 102, 126, 176, 247]. Although such
demonstrations have been limited to ANN workloads, they can be used to map SNNs trained using supervised learning algorithms. Spintronic/magnetic-tunnel-junction-based crossbars, however, have not been widely demonstrated, owing to very low ON/OFF resistance ratios arising from fabrication challenges. A lower ON/OFF ratio can limit application accuracy. Simulation studies have explored STDP-based learning [204] at the array level assuming higher projected ON/OFF ratios [80], which have been closely matched through experimental demonstration [86].
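To see why a low ON/OFF ratio hurts, the sketch below maps normalized weights onto quantized conductance levels between Gmin and Gmax: a small Gmax/Gmin ratio compresses the usable dynamic range, so distinct weights become harder to resolve. The linear mapping, bit width, and conductance values are illustrative assumptions, not taken from any cited device.

```python
import numpy as np

def map_weights_to_conductance(w, g_max=1e-4, on_off_ratio=10.0, bits=4):
    """Map normalized weights in [0, 1] to quantized conductances in [Gmin, Gmax]."""
    g_min = g_max / on_off_ratio
    levels = 2 ** bits
    g_ideal = g_min + np.clip(w, 0.0, 1.0) * (g_max - g_min)
    step = (g_max - g_min) / (levels - 1)
    return np.round((g_ideal - g_min) / step) * step + g_min   # snap to device levels

w = np.linspace(0, 1, 5)
for ratio in (100.0, 3.0):                  # high vs. low ON/OFF ratio
    g = map_weights_to_conductance(w, on_off_ratio=ratio)
    w_seen = g / g.max()                    # weights as "seen" by a crossbar read
    print(f"ON/OFF={ratio:5.1f} ->", np.round(w_seen, 2))
```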
5 DISCUSSION
Artificial intelligence has become ubiquitous in diverse fields and is transforming the world around
us. The current powerful machine learning models are mostly deployed on large cloud-computing
systems. To enable large-scale adoption of edge intelligence or TinyML on IoT devices, there is
a need to rethink the prevailing solutions. To that effect, techniques such as model compression, pruning, and quantization in ANNs have shown significant promise, but they may not be enough to achieve brain-like efficiency. Brain-inspired neuromorphic computing, specifically SNNs, can assist in bridging the energy gap. SNNs operate as dynamical systems with temporally evolving quantities that define the dynamics of the neurons and synapses. Although recent gradient-based methods consider the spike times for parameter updates, backpropagation through time is memory-intensive. Therefore, further research is needed to develop learning algorithms that can efficiently employ the rich timing information to achieve faster learning with fewer resources.
As discussed in Section 2.4 and Figure 9, SNNs can behave as RNNs due to their inherent re-
currence. This provides a unique opportunity to employ SNNs for both static and sequential tasks.
Research so far has mainly focused on using SNNs for static image classification. Other tasks, such as language modeling, speech recognition, and machine translation, where RNNs have performed well, may be explored for SNNs. In Section 2.1, we discussed the IF/LIF neuron model, whose simplicity is its strength, but further research in neuron modeling, structural plasticity, and the role of dendrites in efficient learning is required to mimic the complex dynamics of the brain. Also, advancing neuron models and learning algorithms in isolation may not result in a compatible solution. The focus should be on the co-design of neuron models and learning
algorithms that can achieve an optimal tradeoff between complexity and trainability. Similarly,
the backpropagation-based algorithms may not be very suitable for deep SNNs, and the various hardware-friendly local learning methods may be more apt for performing computations at the edge. Unsupervised STDP-based methods work well on shallow networks for simple tasks but fail to optimize deep networks. Other variants of STDP learning, in combination with homeostasis and local gradient-based techniques, may be explored to discover better learning mechanisms. Batch Normalization [91] has proven to be a successful technique in training deep ANNs, but its application in SNNs is limited and has not resulted in significant improvements. Moreover, edge applications require real-time online learning (batch size = 1), which raises the question of whether batch normalization can be applied at all. SNNs are highly successful in handling spatiotemporal data from event sensors. They perform relatively better than ANNs at tasks such as motion estimation and classifying images from neuromorphic datasets. Therefore, further research is needed to identify more such tasks or to efficiently convert other tasks into a discrete form that is more suitable for SNNs.
In Section 4, we have described the fundamental requirements as well as recent explorations to-
ward building neuromorphic emulators and accelerators. Advancements in the field of neuromorphic hardware have narrowed the gap between the algorithm space and its amenability to today's general-purpose as well as domain-specific accelerators. Further, research in neuromorphic hardware leverages the unique features of SNN workloads that can potentially lead to low computational complexity as well as energy efficiency. Despite the development in the
space of neuromorphic hardware, significant evolution is required to truly realize the potential of
energy-efficiency offered by SNN workloads. Neuromorphic algorithms are constantly evolving,
as new features are being incorporated in the workloads, necessitating neuromorphic hardware
to evolve as well. Over the past two decades, we have witnessed massive leaps in building analog neuromorphic circuits in CMOS as well as in mapping such functionalities directly onto single NVM devices. The primary challenge toward building efficient neuromorphic systems is scalability. We have delved into large-scale CMOS neuromorphic systems, which include wafer-scale integration as well as multi-chip modules. However, scaling NVM-based neuromorphic systems remains a challenge. First, device variability and reliability pose a big challenge in realizing scalable systems using NVM-based neural primitives. Second, NVM-based compute primitives perform the synaptic computations in the analog domain. Analog computing in NVM-based primitives [37] is error-prone in nature and requires modeling and sufficient mitigation [210] for reasonably accurate operation. Various algorithmic strategies have been explored to mitigate, and potentially leverage, device mismatch and variability. For example, stochasticity in synapses [222] and neurons [197] has been shown to be beneficial towards generalization in SNNs.
In spite of significant efforts on building large-scale neuromorphic systems in CMOS, one critical
aspect of such systems is implementing neuro-synaptic functions using analog circuits. Analog
circuits are prone to threshold voltage variations across transistors. Although various electrical
engineering techniques can be used to minimize device variations, analog design of silicon neurons
and synapses requires additional circuitry for incorporating various kinds of adaptation and feedback
mechanisms. There have been explorations in devising techniques to counteract such variations in
analog circuits [164]. Researchers also argue that a certain degree of imprecision is often beneficial
to neural computing, drawing analogies with imprecise and diverse computing patterns in the
biological brain [136]. Further, device mismatch has also been demonstrated [149] to enable stable
receptive fields and balanced network activity in recurrent SNNs. Although there is significant
merit to such arguments, it is also necessary to devise engineering solutions to circumvent the
effect of device variations in large-scale neuromorphic systems, especially as CMOS technology
has scaled down to below 10 nm.
REFERENCES
[1] 2016. International roadmap for devices and systems (IRDS). IEEE. Retrieved from https://irds.ieee.org.
[2] L. Abbott, Brian DePasquale, and Raoul-Martin Memmesheimer. 2016. Building functional networks of spiking model
neurons. Nature Neurosci. 19 (02 2016), 350–355. DOI:https://doi.org/10.1038/nn.4241
[3] Larry F. Abbott and Sacha B. Nelson. 2000. Synaptic plasticity: Taming the beast. Nature Neurosci. 3, 11 (2000),
1178–1183.
[4] Amogh Agrawal, Mustafa Ali, Minsuk Koo, Nitin Rathi, Akhilesh Jaiswal, and Kaushik Roy. 2021. IMPULSE: A 65-nm
digital compute-in-memory macro with fused weights and membrane potential for spike-based sequential learning
tasks. IEEE Solid-state Circ. Lett. 4 (2021), 137–140.
[5] Amogh Agrawal, Aayush Ankit, and Kaushik Roy. 2018. SPARE: Spiking neural network acceleration using rom-
embedded RAMs as in-memory-computation primitives. IEEE Trans. Comput. 68, 8 (2018), 1190–1200.
[6] Filipp Akopyan, Jun Sawada, Andrew Cassidy, Rodrigo Alvarez-Icaza, John Arthur, Paul Merolla, Nabil Imam, Yutaka
Nakamura, Pallab Datta, Gi-Joon Nam et al. 2015. TrueNorth: Design and tool flow of a 65 mw 1 million neuron
programmable neurosynaptic chip. IEEE Trans. Comput.-aid. Des. Integ. Circ. Syst. 34, 10 (2015), 1537–1557.
[7] Alireza Alemi, Christian K. Machens, Sophie Denève, and Jean-Jacques E. Slotine. 2018. Learning nonlinear dynamics
in efficient, balanced spiking networks using local plasticity rules. In 32nd AAAI Conference on Artificial Intelligence.
AAAI Press, 588–595. Retrieved from https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/view/17438.
[8] Stefano Ambrogio, Nicola Ciocchini, Mario Laudato, Valerio Milo, Agostino Pirovano, Paolo Fantini, and Daniele
Ielmini. 2016. Unsupervised learning by spike timing dependent plasticity in phase change memory (PCM) synapses.
Front. Neurosci. 10 (2016), 56.
[9] Stefano Ambrogio, Pritish Narayanan, Hsinyu Tsai, Robert M. Shelby, Irem Boybat, Carmelo Nolfo, Severin Sidler,
Massimo Giordano, Martina Bodini, Nathan C. P. Farinha et al. 2018. Equivalent-accuracy accelerated neural-network
training using analogue memory. Nature 558, 7708 (2018), 60.
[10] A. Amir, B. Taba, D. Berg, T. Melano, J. McKinstry, C. Di Nolfo, T. Nayak, A. Andreopoulos, G. Garreau, M. Mendoza,
J. Kusnitz, M. Debole, S. Esser, T. Delbruck, M. Flickner, and D. Modha. 2017. A low power, fully event-based gesture
recognition system. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 7388–7397. DOI:https://
doi.org/10.1109/CVPR.2017.781
[11] Aayush Ankit, Abhronil Sengupta, Priyadarshini Panda, and Kaushik Roy. 2017. RESPARC: A reconfigurable and
energy-efficient architecture with memristive crossbars for deep spiking neural networks. In 54th Annual Design
Automation Conference. ACM, 27.
[12] John V. Arthur and Kwabena Boahen. 2004. Recurrently connected silicon neurons with active dendrites for one-shot
learning. In IEEE International Joint Conference on Neural Networks. IEEE, 1699–1704.
[13] Richard C. Atkinson and Richard M. Shiffrin. 1968. Human memory: A proposed system and its control processes.
In Psychology of Learning and Motivation, Vol. 2. Elsevier, 89–195.
[14] Alireza Bagheri, Osvaldo Simeone, and Bipin Rajendran. 2018. Adversarial training for probabilistic spiking neural
networks. In IEEE 19th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC).
IEEE, 1–5.
[15] Chiara Bartolozzi and Giacomo Indiveri. 2007. Synaptic dynamics in analog VLSI. Neural Computat. 19, 10 (2007),
2581–2603.
[16] Guillaume Bellec, Darjan Salaj, Anand Subramoney, Robert Legenstein, and Wolfgang Maass. 2018. Long short-term
memory and learning-to-learn in networks of spiking neurons. In International Conference on Advances in Neural
Information Processing Systems. 787–797.
[17] Guillaume Bellec, Franz Scherr, Elias Hajek, Darjan Salaj, Robert Legenstein, and Wolfgang Maass. 2019. Biolog-
ically inspired alternatives to backpropagation through time for learning in recurrent neural nets. arXiv preprint
arXiv:1901.09049 (2019).
[18] Guillaume Bellec, Franz Scherr, Anand Subramoney, Elias Hajek, Darjan Salaj, Robert Legenstein, and Wolfgang
Maass. 2020. A solution to the learning dilemma for recurrent networks of spiking neurons. Nature Commun. 11,
1 (July 2020), 3625. DOI:https://doi.org/10.1038/s41467-020-17236-y
[19] Marc Benayoun, Jack D. Cowan, Wim van Drongelen, and Edward Wallace. 2010. Avalanches in a stochastic model
of spiking neurons. PLoS Computat. Biol. 6, 7 (2010), e1000846.
[20] Yoshua Bengio, Patrice Simard, and Paolo Frasconi. 1994. Learning long-term dependencies with gradient descent is
difficult. IEEE Trans. Neural Netw. 5, 2 (1994), 157–166.
[21] Ben Varkey Benjamin, Peiran Gao, Emmett McQuinn, Swadesh Choudhary, Anand R. Chandrasekaran, Jean-Marie
Bussat, Rodrigo Alvarez-Icaza, John V. Arthur, Paul A. Merolla, and Kwabena Boahen. 2014. NeuroGrid: A mixed-
analog-digital multichip system for large-scale neural simulations. Proc. IEEE 102, 5 (2014), 699–716.
[22] R. Benosman, C. Clercq, X. Lagorce, S. Ieng, and C. Bartolozzi. 2014. Event-based visual flow. IEEE Trans. Neural Netw.
Learn. Syst. 25, 2 (2014), 407–417. DOI:https://doi.org/10.1109/TNNLS.2013.2273537
[23] Ryad Benosman, Sio-Hoi Ieng, Charles Clercq, Chiara Bartolozzi, and Mandyam Srinivasan. 2012. Asynchronous
frameless event-based optical flow. Neural Netw. 27 (2012), 32–37.
[24] Roberto Bez. 2009. Chalcogenide PCM: A memory technology for next decade. In IEEE International Electron Devices
Meeting. IEEE, 1–4.
[25] Guo-qiang Bi and Mu-ming Poo. 2001. Synaptic modification by correlated activity: Hebb’s postulate revisited. Ann.
Rev. Neurosci. 24, 1 (2001), 139–166.
[26] Olivier Bichler, Manan Suri, Damien Querlioz, Dominique Vuillaume, Barbara DeSalvo, and Christian Gamrat. 2012.
Visual pattern extraction using energy-efficient “2-PCM synapse” neuromorphic architecture. IEEE Trans. Electron
Dev. 59, 8 (2012), 2206–2214.
[27] Kwabena A. Boahen. 2000. Point-to-point connectivity between neuromorphic chips using address events. IEEE Trans.
Circ. Syst. II: Analog Digit. Sig. Process. 47, 5 (2000), 416–434.
[28] Irem Boybat, Manuel Le Gallo, S. R. Nandakumar, Timoleon Moraitis, Thomas Parnell, Tomas Tuma, Bipin Rajendran,
Yusuf Leblebici, Abu Sebastian, and Evangelos Eleftheriou. 2018. Neuromorphic computing with multi-memristive
synapses. Nature Commun. 9, 1 (2018), 2514.
[29] C. Brandli, R. Berner, M. Yang, S. Liu, and T. Delbruck. 2014. A 240×180 130 dB 3 μs latency global shutter spatiotem-
poral vision sensor. IEEE J. Solid-state Circ. 49, 10 (2014), 2333–2341.
[30] Karla Burelo, Mohammadali Sharifshazileh, Niklaus Krayenbühl, Georgia Ramantani, Giacomo Indiveri, and Jo-
hannes Sarnthein. 2021. A spiking neural network (SNN) for detecting high frequency oscillations (HFOs) in the
intraoperative ECoG. Sci. Rep. 11, 1 (2021), 1–10.
[31] Geoffrey W. Burr, Robert M. Shelby, Abu Sebastian, Sangbum Kim, Seyoung Kim, Severin Sidler, Kumar Virwani,
Masatoshi Ishii, Pritish Narayanan, Alessandro Fumarola et al. 2017. Neuromorphic computing using non-volatile
memory. Adv. Phys. X 2, 1 (2017), 89–124.
[32] Fuxi Cai et al. 2019. A fully integrated reprogrammable memristor–CMOS system for efficient multiply–accumulate
operations. Nature Electron. 2, 7 (2019), 290–299.
[33] Luis Camunas-Mesa, Antonio Acosta-Jiménez, Teresa Serrano-Gotarredona, and Bernabé Linares-Barranco. 2008.
Fully digital AER convolution chip for vision processing. In IEEE International Symposium on Circuits and Systems.
IEEE, 652–655.
[34] Yongqiang Cao, Yang Chen, and Deepak Khosla. 2015. Spiking deep convolutional neural networks for energy-
efficient object recognition. Int. J. Comput. Vis. 113, 1 (2015), 54–66.
[35] Snaider Carrillo, Jim Harkin, L. J. McDaid, Sandeep Pande, Seamus Cawley, Brian McGinley, and Fearghal Mor-
gan. 2012. Hierarchical network-on-chip and traffic compression for spiking neural network implementations. In
IEEE/ACM 6th International Symposium on Networks-on-Chip. IEEE, 83–90.
[36] Mauro Cettolo, Jan Niehues, Sebastian Stüker, Luisa Bentivogli, and Marcello Federico. 2014. Report on the 11th
IWSLT evaluation campaign, IWSLT 2014. In International Workshop on Spoken Language Translation.
[37] I. Chakraborty, A. Jaiswal, A. K. Saha, S. K. Gupta, and K. Roy. 2020. Pathways to efficient neuromorphic computing
with non-volatile memory technologies. Appl. Phys. Rev. 7, 2 (2020), 021308.
[38] Indranil Chakraborty, Gobinda Saha, and Kaushik Roy. 2019. Photonic in-memory computing primitive for spiking
neural networks using phase-change materials. Phys. Rev. Appl. 11, 1 (2019), 014063.
[39] Indranil Chakraborty, Gobinda Saha, Abhronil Sengupta, and Kaushik Roy. 2018. Toward fast neural computing using
all-photonic phase change spiking neurons. Sci. Rep. 8, 1 (2018), 12980.
[40] Zengguang Cheng, Carlos Ríos, Wolfram H. P. Pernice, C. David Wright, and Harish Bhaskaran. 2017. On-chip
photonic synapse. Sci. Adv. 3, 9 (2017), e1700160.
[41] Elisabetta Chicca, Fabio Stefanini, Chiara Bartolozzi, and Giacomo Indiveri. 2014. Neuromorphic electronic circuits
for building autonomous cognitive systems. Proc. IEEE 102, 9 (2014), 1367–1388.
[42] Leon Chua. 1971. Memristor-the missing circuit element. IEEE Trans. Circ. Theor. 18, 5 (1971), 507–519.
[43] Wonil Chung, Mengwei Si, and Peide D. Ye. 2018. First demonstration of Ge ferroelectric nanowire FET as synaptic device for online learning in neural network with high number of conductance state and Gmax/Gmin. In IEEE International Electron Devices Meeting (IEDM). IEEE, 15–2.
[44] Gregory Cohen, Saeed Afshar, Brittany Morreale, Travis Bessell, Andrew Wabnitz, Mark Rutten, and André van
Schaik. 2019. Event-based sensing for space situational awareness. J. Astronaut. Sci. 66 (01 2019). DOI:https://doi.
org/10.1007/s40295-018-00140-5
[45] Mike Davies et al. 2021. Taking neuromorphic computing to the next level with Loihi 2. In Intel Newsroom Technol-
ogy brief. Retrieved from https://download.intel.com/newsroom/2021/new-technologies/neuromorphic-computing-
loihi-2-brief.pdf.
[46] Mike Davies, Narayan Srinivasa, Tsung-Han Lin, Gautham Chinya, Yongqiang Cao, Sri Harsha Choday, Georgios
Dimou, Prasad Joshi, Nabil Imam, Shweta Jain et al. 2018. Loihi: A neuromorphic manycore processor with on-chip
learning. IEEE Micro 38, 1 (2018), 82–99.
[47] Peter Dayan and Laurence F. Abbott. 2005. Theoretical Neuroscience: Computational and Mathematical Modeling of
Neural Systems. MIT Press.
[48] Arnaud Delorme et al. 2001. Networks of integrate-and-fire neurons using Rank Order Coding B: Spike timing de-
pendent plasticity and emergence of orientation selectivity. Neurocomputing 38 (2001), 539–545.
[49] Yiğit Demirağ, Filippo Moro, Thomas Dalgaty, Gabriele Navarro, Charlotte Frenkel, Giacomo Indiveri, Elisa Vianello,
and Melika Payvand. 2021. PCM-trace: Scalable synaptic eligibility traces with resistivity drift of phase-change ma-
terials. In IEEE International Symposium on Circuits and Systems (ISCAS). IEEE, 1–5.
[50] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. ImageNet: A large-scale hierarchical image
database. In IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 248–255.
[51] Brian DePasquale, Mark M. Churchland, and L. F. Abbott. 2016. Using firing-rate dynamics to train recurrent net-
works of spiking model neurons. arXiv preprint arXiv:1601.07620 (2016).
[52] Peter U. Diehl and Matthew Cook. 2015. Unsupervised learning of digit recognition using spike-timing-dependent
plasticity. Front. Computat. Neurosci. 9 (2015), 99.
[53] Peter U. Diehl, Daniel Neil, Jonathan Binas, Matthew Cook, Shih-Chii Liu, and Michael Pfeiffer. 2015. Fast-classifying,
high-accuracy spiking deep networks through weight and threshold balancing. In International Joint Conference on
Neural Networks. IEEE, 1–8.
[54] Peter U. Diehl, Bruno U. Pedroni, Andrew Cassidy, Paul Merolla, Emre Neftci, and Guido Zarrella. 2016. TrueHappi-
ness: Neuromorphic emotion recognition on TrueNorth. In International Joint Conference on Neural Networks. IEEE,
4278–4285.
[55] Elisa Donati, Melika Payvand, Nicoletta Risi, Renate Krause, and Giacomo Indiveri. 2019. Discrimination of EMG
signals using a neuromorphic implementation of a spiking neural network. IEEE Trans. Biomed. Circ. Syst. 13,
5 (2019), 795–803.
[56] Sourav Dutta, Atanu K. Saha, Priyadarshini Panda, W. Chakraborty, J. Gomez, Abhishek Khanna, Sumeet Gupta,
Kaushik Roy, and Suman Datta. 2019. Biologically plausible energy-efficient ferroelectric quasi-leaky integrate and
fire neuron. In Symposium on VLSI Technology.
[57] Salah El Hihi and Yoshua Bengio. 1996. Hierarchical recurrent neural networks for long-term dependencies. In In-
ternational Conference on Advances in Neural Information Processing Systems. 493–499.
[58] Sukru B. Eryilmaz, Duygu Kuzum, Rakesh Jeyasingh, SangBum Kim, Matthew BrightSky, Chung Lam, and
H.-S. Philip Wong. 2014. Brain-like associative learning using a nanoscale non-volatile phase change synaptic de-
vice array. Front. Neurosci. 8 (2014), 205.
[59] S. Burc Eryilmaz, Duygu Kuzum, Rakesh G. D. Jeyasingh, SangBum Kim, Matthew BrightSky, Chung Lam, and
H.-S. Philip Wong. 2013. Experimental demonstration of array-level learning with phase change synaptic devices. In
IEEE International Electron Devices Meeting. IEEE, 25–5.
[60] Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Chaowei Xiao, Atul Prakash, Tadayoshi
Kohno, and Dawn Song. 2018. Robust physical-world attacks on deep learning visual classification. In IEEE Conference
on Computer Vision and Pattern Recognition. 1625–1634.
[61] Ethan Farquhar and Paul Hasler. 2005. A bio-physically inspired silicon neuron. IEEE Trans. Circ. Syst. I: Reg. Papers
52, 3 (2005), 477–488.
[62] J. Feldmann, N. Youngblood, C. D. Wright, H. Bhaskaran, and W. H. P. Pernice. 2019. All-optical spiking neurosynaptic
networks with self-learning capabilities. Nature 569, 7755 (2019), 208.
[63] Paul Ferré, Franck Mamalet, and Simon J. Thorpe. 2018. Unsupervised feature learning with winner-takes-all based
STDP. Front. Computat. Neurosci. 12 (2018), 24.
[64] Fopefolu Folowosele, Ralph Etienne-Cummings, and Tara Julia Hamilton. 2009. A CMOS switched capacitor imple-
mentation of the Mihalas-Niebur neuron. In IEEE Biomedical Circuits and Systems Conference. IEEE, 105–108.
[65] Xuanyao Fong, Yusung Kim, Karthik Yogendra, Deliang Fan, Abhronil Sengupta, Anand Raghunathan, and Kaushik
Roy. 2016. Spin-transfer torque devices for logic and memory: Prospects and perspectives. IEEE Trans. Comput.-aid.
Des. Integ. Circ. Syst. 35, 1 (2016), 1–22.
[66] Nicolas Frémaux and Wulfram Gerstner. 2016. Neuromodulated spike-timing-dependent plasticity, and theory of
three-factor learning rules. Front. Neural Circ. 9 (2016), 85.
[67] Max Garagnani, Guglielmo Lucchese, Rosario Tomasello, Thomas Wennekers, and Friedemann Pulvermüller. 2017. A
spiking neurocomputational model of high-frequency oscillatory brain responses to words and pseudowords. Front.
Computat. Neurosci. 10 (2017), 145.
[68] Nikhil Garg, Ismael Balafrej, Yann Beilliard, Dominique Drouin, Fabien Alibart, and Jean Rouat. 2021. Signals to spikes
for neuromorphic regulated reservoir computing and EMG hand gesture recognition. In International Conference on
Neuromorphic Systems. 1–8.
[69] Wulfram Gerstner and Werner M. Kistler. 2002. Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cam-
bridge University Press.
[70] Wulfram Gerstner, Werner M. Kistler, Richard Naud, and Liam Paninski. 2014. Neuronal Dynamics: From Single
Neurons to Networks and Models of Cognition. Cambridge University Press.
[71] Aditya Gilra and Wulfram Gerstner. 2017. Predicting non-linear dynamics by stable local learning in a recurrent
spiking neural network. eLife 6 (Nov. 2017), e28295. DOI:https://doi.org/10.7554/eLife.28295
[72] Ludovic Goux, Piotr Czarnecki, Yang Yin Chen, Luigi Pantisano, XinPeng Wang, Robin Degraeve, Bogdan Govoreanu,
Malgorzata Jurczak, D. J. Wouters, and Laith Altimime. 2010. Evidences of oxygen-mediated resistive-switching
mechanism in TiN\HfO2\Pt cells. Appl. Phys. Lett. 97, 24 (2010), 243509.
[73] Michael Graupner and Nicolas Brunel. 2012. Calcium-based plasticity model explains sensitivity of synaptic changes
to spike pattern, rate, and dendritic location. Proc. Nat. Acad. Sci. 109, 10 (2012), 3991–3996.
[74] Ankur Gupta and Lyle N. Long. 2007. Character recognition using spiking neural networks. In International Joint
Conference on Neural Networks. IEEE, 53–58.
[75] G. Haessig, A. Cassidy, R. Alvarez, R. Benosman, and G. Orchard. 2018. Spiking optical flow for event-based sensors
using IBM’s TrueNorth neurosynaptic system. IEEE Trans. Biomed. Circ. Syst. 12, 4 (2018), 860–870. DOI:https://doi.
org/10.1109/TBCAS.2018.2834558
[76] Bing Han and Kaushik Roy. 2020. Deep spiking neural network: Energy efficiency through time based coding. In
European Conference on Computer Vision.
[77] Bing Han, Gopalakrishnan Srinivasan, and Kaushik Roy. 2020. RMP-SNNs: Residual membrane potential neuron for
enabling deeper high-accuracy and low-latency spiking neural networks. In IEEE Conference on Computer Vision and
Pattern Recognition (CVPR).
[78] Paul E. Hasler, Chris Diorio, Bradley A. Minch, and Carver Mead. 1995. Single transistor learning synapses. In Inter-
national Conference on Advances in Neural Information Processing Systems. 817–824.
[79] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In
IEEE Conference on Computer Vision and Pattern Recognition. 770–778.
[80] Atsufumi Hirohata, Hiroaki Sukegawa, Hideto Yanagihara, Igor Žutić, Takeshi Seki, Shigemi Mizukami, and Raja
Swaminathan. 2015. Roadmap for emerging materials for Spintronic device applications. IEEE Trans. Magnet. 51,
10 (2015), 1–11.
[81] Sepp Hochreiter, Yoshua Bengio, Paolo Frasconi, Jürgen Schmidhuber et al. 2001. Gradient flow in recurrent nets:
The difficulty of learning long-term dependencies. A Field Guide to Dynamical Recurrent Neural Networks, IEEE Press.
[82] Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computat. 9, 8 (1997), 1735–1780.
[83] Alan L. Hodgkin and Andrew F. Huxley. 1952. A quantitative description of membrane current and its application
to conduction and excitation in nerve. J. Physiol. 117, 4 (1952), 500–544.
[84] Miao Hu, Yiran Chen, J. Joshua Yang, Yu Wang, and Hai Helen Li. 2016. A compact memristor-based dynamic synapse
for spiking neural networks. IEEE Trans. Comput.-aid. Des. Integ. Circ. Syst. 36, 8 (2016), 1353–1366.
[85] Eric Hunsberger and Chris Eliasmith. 2015. Spiking deep networks with LIF neurons. arXiv preprint arXiv:1510.08829
(2015).
[86] S. Ikeda, J. Hayakawa, Y. Ashizawa, Y. M. Lee, K. Miura, H. Hasegawa, M. Tsunoda, F. Matsukura, and H. Ohno. 2008.
Tunnel magnetoresistance of 604% at 300 K by suppression of Ta diffusion in CoFeB/MgO/CoFeB pseudo-spin-
valves annealed at high temperature. Appl. Phys. Lett. 93, 8 (2008), 082508.
[87] Giacomo Indiveri. 2000. Modeling selective attention using a neuromorphic analog VLSI device. Neural Computat.
12, 12 (2000), 2857–2880.
[88] Giacomo Indiveri. 2021. Introducing “neuromorphic computing and engineering.” Neuromorph. Comput. Eng. 1,
1 (2021), 010401.
[89] Giacomo Indiveri, Elisabetta Chicca, and Rodney J. Douglas. 2006. A VLSI array of low-power spiking neurons and
bistable synapses with spike-timing dependent plasticity. IEEE Trans. Neural Netw. 17, 1 (2006).
[90] Giacomo Indiveri, Bernabé Linares-Barranco, Tara Julia Hamilton, André Van Schaik, Ralph Etienne-Cummings,
Tobi Delbruck, Shih-Chii Liu, Piotr Dudek, Philipp Häfliger, Sylvie Renaud et al. 2011. Neuromorphic silicon neuron
circuits. Front. Neurosci. 5 (2011), 73.
[91] Sergey Ioffe and Christian Szegedy. 2015. Batch normalization: Accelerating deep network training by reducing
internal covariate shift. arXiv preprint arXiv:1502.03167 (2015).
[92] Eugene M. Izhikevich. 2003. Simple model of spiking neurons. IEEE Trans. Neural Netw. 14, 6 (2003), 1569–1572.
[93] Akhilesh Jaiswal, Sourjya Roy, Gopalakrishnan Srinivasan, and Kaushik Roy. 2017. Proposal for a leaky-integrate-fire
spiking neuron based on magnetoelectric switching of ferromagnets. IEEE Trans. Electron Dev. 64, 4 (2017), 1818–1824.
[94] Matthew Jerry, Pai-Yu Chen, Jianchi Zhang, Pankaj Sharma, Kai Ni, Shimeng Yu, and Suman Datta. 2017. Ferroelectric
FET analog synapse for acceleration of deep neural network training. In IEEE International Electron Devices Meeting
(IEDM). IEEE, 6–2.
[95] Yu Ji, YouHui Zhang, ShuangChen Li, Ping Chi, CiHang Jiang, Peng Qu, Yuan Xie, and WenGuang Chen. 2016.
NEUTRAMS: Neural network transformation and co-design under neuromorphic hardware constraints. In 49th An-
nual IEEE/ACM International Symposium on Microarchitecture (MICRO). 1–13. DOI:https://doi.org/10.1109/MICRO.
2016.7783724
[96] Yingyezhe Jin, Wenrui Zhang, and Peng Li. 2018. Hybrid macro/micro level backpropagation for training deep spiking
neural networks. In Advances in Neural Information Processing Systems 31. Curran Associates, Inc., 7005–7015.
[97] Sung Hyun Jo, Ting Chang, Idongesit Ebong, Bhavitavya B. Bhadviya, Pinaki Mazumder, and Wei Lu. 2010. Nanoscale
memristor device as synapse in neuromorphic systems. Nano Lett. 10, 4 (2010), 1297–1301.
[98] Norman P. Jouppi, Cliff Young, Nishant Patil, David Patterson, Gaurav Agrawal, Raminder Bajwa, Sarah Bates,
Suresh Bhatia, Nan Boden, Al Borchers et al. 2017. In-datacenter performance analysis of a tensor processing unit.
In ACM/IEEE 44th Annual International Symposium on Computer Architecture (ISCA). IEEE, 1–12.
[99] Jacques Kaiser, Hesham Mostafa, and Emre Neftci. 2020. Synaptic plasticity dynamics for deep continuous local
learning (DECOLLE). Front. Neurosci. 14 (2020), 424.
[100] Georg B. Keller and Thomas D. Mrsic-Flogel. 2018. Predictive processing: A canonical cortical computation. Neuron
100, 2 (2018), 424–435. DOI:https://doi.org/10.1016/j.neuron.2018.10.003
[101] Richard Kempter, Wulfram Gerstner, and J. Leo Van Hemmen. 1999. Hebbian learning and spiking neurons. Phys.
Rev. E 59, 4 (1999), 4498.
[102] Riduan Khaddam-Aljameh, Milos Stanisavljevic, Jordi Fornt Mas, Geethan Karunaratne, Matthias Brändli, Feng Liu,
Abhairaj Singh, Silvia M. Müller, Urs Egger, Anastasios Petropoulos et al. 2022. HERMES-Core–A 1.59-TOPS/mm2
PCM on 14-nm CMOS in-memory compute core using 300-ps/LSB linearized CCO-based ADCs. IEEE J. Solid-state
Circ. 57, 4 (2022), 1027–1038.
[103] Saeed Reza Kheradpisheh, Mohammad Ganjtabesh, Simon J. Thorpe, and Timothée Masquelier. 2018. STDP-based
spiking deep convolutional neural networks for object recognition. Neural Netw. 99 (2018), 56–67.
[104] S. Kim, M. Ishii, S. Lewis, T. Perri, M. BrightSky, W. Kim, R. Jordan, G. W. Burr, N. Sosa, A. Ray et al. 2015. NVM
neuromorphic core with 64k-cell (256-by-256) phase change memory synaptic array with on-chip neuron circuits
for continuous in-situ learning. In IEEE International Electron Devices Meeting (IEDM). IEEE, 17–1.
[105] Isabell Kiral-Kornek, Dulini Mendis, Ewan S. Nurse, Benjamin S. Mashford, Dean R. Freestone, David B. Grayden,
and Stefan Harrer. 2017. TrueNorth-enabled real-time classification of EEG data for brain-computer interfacing. In
39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE, 1648–1651.
[106] Alex Krizhevsky and Geoffrey Hinton. 2009. Learning Multiple Layers of Features from Tiny Images. Technical Report.
Citeseer.
[107] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. 2012. ImageNet classification with deep convolutional neu-
ral networks. In International Conference on Advances in Neural Information Processing Systems. 1097–1105.
[108] Alexander Kugele, Thomas Pfeil, Michael Pfeiffer, and Elisabetta Chicca. 2020. Efficient processing of spatio-temporal
data streams with spiking neural networks. Front. Neurosci. 14 (2020), 439.
[109] Kaushalya Kumarasinghe, Nikola Kasabov, and Denise Taylor. 2021. Brain-inspired spiking neural networks for
decoding and understanding muscle activity and kinematics from electroencephalography signals during hand move-
ments. Sci. Rep. 11, 1 (2021), 1–15.
[110] Alexey Kurakin, Ian Goodfellow, Samy Bengio et al. 2018. Adversarial examples in the physical world. Artificial
Intelligence Safety and Security. Chapman and Hall/CRC, 99–112.
[111] Duygu Kuzum, Rakesh G. D. Jeyasingh, Byoungil Lee, and H.-S. Philip Wong. 2011. Nanoelectronic programmable
synapses based on phase change materials for brain-inspired computing. Nano Lett. 12, 5 (2011), 2179–2186.
[112] Duygu Kuzum, Shimeng Yu, and H. S. Philip Wong. 2013. Synaptic electronics: Materials, devices and applications.
Nanotechnology 24, 38 (2013), 382001.
[113] Raphael Lamprecht and Joseph LeDoux. 2004. Structural plasticity and memory. Nature Rev. Neurosci. 5, 1 (2004), 45.
[114] S. Lashkare, S. Chouhan, T. Chavan, A. Bhat, P. Kumbhare, and U. Ganguly. 2018. PCMO RRAM for integrate-and-fire
neuron in spiking neural networks. IEEE Electron Dev. Lett. 39, 4 (2018), 484–487.
[115] John Lazzaro and John Wawrzynek. 1994. Low-power silicon neurons, axons and synapses. In Silicon Implementation
of Pulse Coded Neural Networks. Springer, 153–164.
[116] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. 2015. Deep learning. Nature 521, 7553 (2015), 436.
[117] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. 1998. Gradient-based learning applied to document
recognition. Proc. IEEE 86, 11 (1998), 2278–2324.
[118] Chankyu Lee, Adarsh Kumar Kosta, Alex Zihao Zhu, Kenneth Chaney, Kostas Daniilidis, and Kaushik Roy. 2020.
Spike-FlowNet: Event-based optical flow estimation with energy-efficient hybrid neural networks. In Computer Vi-
sion – ECCV 2020. Springer International Publishing, Cham, 366–382.
[119] Chankyu Lee, Priyadarshini Panda, Gopalakrishnan Srinivasan, and Kaushik Roy. 2018. Training deep spiking con-
volutional neural networks with STDP-based unsupervised pre-training followed by supervised fine-tuning. Front.
Neurosci. 12 (2018), 435.
[120] Chankyu Lee, Syed Shakib Sarwar, Priyadarshini Panda, Gopalakrishnan Srinivasan, and Kaushik Roy. 2020. Enabling
spike-based backpropagation for training deep neural network architectures. Front. Neurosci. 14 (2020).
[121] C. Lee, G. Srinivasan, P. Panda, and K. Roy. 2018. Deep spiking convolutional neural network trained with unsuper-
vised spike timing dependent plasticity. IEEE Trans. Cog. Devel. Syst. (2018), 1–1.
[122] Dongsoo Lee and Kaushik Roy. 2013. Area efficient ROM-embedded SRAM cache. IEEE Transactions on Cognitive and
Developmental Systems 11, 3 (2018), 384–394.
[123] Jun Haeng Lee, Tobi Delbruck, and Michael Pfeiffer. 2016. Training deep spiking neural networks using backpropa-
gation. Front. Neurosci. 10 (2016), 508.
[124] A. S. Lele, Y. Fang, J. Ting, and A. Raychowdhury. 2020. Learning to walk: Spike based reinforcement learning for
hexapod robot central pattern generation. In 2nd IEEE International Conference on Artificial Intelligence Circuits and
Systems (AICAS). 208–212. DOI:https://doi.org/10.1109/AICAS48895.2020.9073987
[125] R. Gary Leonard and George Doddington. 1993. Tidigits speech corpus. In Texas Instruments, Inc. Linguistic Data
Consortium.
[126] Can Li, Miao Hu, Yunning Li, Hao Jiang, Ning Ge, Eric Montgomery, Jiaming Zhang, Wenhao Song, Noraica Dávila,
Catherine E. Graves et al. 2018. Analogue signal and image processing with large memristor crossbars. Nature Elec-
tron. 1, 1 (2018), 52.
[127] Hongmin Li, Hanchao Liu, Xiangyang Ji, Guoqi Li, and Luping Shi. 2017. CIFAR10-DVS: An event-stream dataset
for object classification. Front. Neurosci. 11 (2017), 309.
[128] Ling Liang, Xing Hu, Lei Deng, Yujie Wu, Guoqi Li, Yufei Ding, Peng Li, and Yuan Xie. 2020. Exploring adversarial
attack in spiking neural networks with spike-compatible gradient. arXiv preprint arXiv:2001.01587 (2020).
[129] Patrick Lichtsteiner, Christoph Posch, and Tobi Delbruck. 2008. A 128×128 120 dB 15 μs latency asynchronous tem-
poral contrast vision sensor. IEEE J. Solid-state Circ. 43, 2 (2008), 566–576.
[130] S. Liu, B. Rueckauer, E. Ceolini, A. Huber, and T. Delbruck. 2019. Event-driven sensing for efficient perception: Vision
and audition algorithms. IEEE Sig. Process. Mag. 36, 6 (2019), 29–37. DOI:https://doi.org/10.1109/MSP.2019.2928127
[131] Ali Lotfi Rezaabad and Sriram Vishwanath. 2020. Long short-term memory spiking networks and their applications.
In International Conference on Neuromorphic Systems 2020. 1–9.
[132] Sen Lu and Abhronil Sengupta. 2020. Exploring the connection between binary and spiking neural networks. arXiv
preprint arXiv:2002.10064 (2020).
[133] Wolfgang Maass. 1997. Networks of spiking neurons: The third generation of neural network models. Neural Netw.
10, 9 (1997), 1659–1671.
[134] Wolfgang Maass. 2011. Liquid state machines: Motivation, theory, and applications. In Computability in Context:
Computation and Logic in the Real World. World Scientific, 275–296.
[135] Wolfgang Maass and Henry Markram. 2004. On the computational power of circuits of spiking neurons. J. Comput.
Syst. Sci. 69, 4 (2004), 593–616.
[136] Wolfgang Maass, Thomas Natschläger, and Henry Markram. 2002. Real-time computing without stable states: A new
framework for neural computation based on perturbations. Neural Computat. 14, 11 (2002), 2531–2560.
[137] Misha Mahowald. 1994. An Analog VLSI System for Stereoscopic Vision, Vol. 265. Springer Science & Business Media.
[138] Misha Mahowald. 1994. The silicon retina. In An Analog VLSI System for Stereoscopic Vision. Springer, 4–65.
[139] Misha Mahowald and Rodney Douglas. 1991. A silicon neuron. Nature 354, 6354 (1991), 515.
[140] Stephen J. Martin, Paul D. Grimwood, and Richard G. M. Morris. 2000. Synaptic plasticity and memory: An evaluation
of the hypothesis. Ann. Rev. Neurosci. 23, 1 (2000), 649–711.
[141] Timothée Masquelier and Simon J. Thorpe. 2007. Unsupervised learning of visual features through spike timing
dependent plasticity. PLoS Computat. Biol. 3, 2 (2007), e31.
[142] Stephen J. Maybank, Sio-Hoi Ieng, Davide Migliore, and Ryad Benosman. 2021. Optical flow estimation using the
Fisher–Rao metric. Neuromorph. Comput. Eng. 1, 2 (2021), 024004.
[143] Carver Mead. 1990. Neuromorphic electronic systems. Proc. IEEE 78, 10 (1990), 1629–1636.
[144] Carver Mead. 2020. How we created neuromorphic engineering. Nature Electron. 3, 7 (2020), 434–435.
[145] Carver Mead and Mohammed Ismail. 2012. Analog VLSI Implementation of Neural Systems, Vol. 80. Springer Science
& Business Media.
[146] Adnan Mehonic and Anthony J. Kenyon. 2016. Emulating the electrical activity of the neuron using a silicon oxide
RRAM cell. Front. Neurosci. 10 (2016), 57.
[147] Paul Merolla, John Arthur, Rodrigo Alvarez, Jean-Marie Bussat, and Kwabena Boahen. 2014. A multicast tree router
for multichip neuromorphic systems. IEEE Trans. Circ. Syst. I: Reg. Papers 61, 3 (2014), 820–833.
[148] Alexander Meulemans, Matilde Tristany Farinha, Javier Garcia Ordonez, Pau Vilimelis Aceituno, João Sacramento,
and Benjamin F. Grewe. 2021. Credit assignment in neural networks through deep feedback control. Adv. Neural Inf.
Process. Syst. 34 (2021).
[149] Moritz Benjamin Milde. 2019. Spike-based Computational Primitives for Vision-based Scene Understanding. Ph.D. Dis-
sertation. University of Zurich.
[150] Moritz B. Milde, Olivier J. N. Bertrand, Harshawardhan Ramachandran, Martin Egelhaaf, and Elisabetta Chicca. 2018.
Spiking elementary motion detector in neuromorphic systems. Neural Computat. 30, 9 (2018), 2384–2417.
[151] A. Mitrokhin, C. Fermüller, C. Parameshwara, and Y. Aloimonos. 2018. Event-based moving object detection and
tracking. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 1–9. DOI:https://doi.org/10.
1109/IROS.2018.8593805
[152] Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Tim Harley, Timothy P. Lillicrap, David
Silver, and Koray Kavukcuoglu. 2016. Asynchronous methods for deep reinforcement learning. In 33rd International
Conference on International Conference on Machine Learning (ICML’16). JMLR.org, 1928–1937.
[153] Saber Moradi, Nabil Imam, Rajit Manohar, and Giacomo Indiveri. 2013. A memory-efficient routing method for large-
scale spiking neural networks. In European Conference on Circuit Theory and Design (ECCTD). IEEE, 1–4.
[154] Saber Moradi, Ning Qiao, Fabio Stefanini, and Giacomo Indiveri. 2017. A scalable multicore architecture with het-
erogeneous memory structures for dynamic neuromorphic asynchronous processors (DYNAPs). IEEE Trans. Biomed.
Circ. Syst. 12, 1 (2017), 106–122.
[155] Hesham Mostafa. 2017. Supervised learning based on temporal coding in spiking neural networks. IEEE Trans. Neural
Netw. Learn. Syst. 29, 7 (2017), 3227–3235.
[156] Hesham Mostafa, Bruno U. Pedroni, Sadique Sheik, and Gert Cauwenberghs. 2017. Fast classification using sparsely
active spiking networks. In IEEE International Symposium on Circuits and Systems (ISCAS). IEEE, 1–4.
[157] Hesham Mostafa, Vishwajith Ramesh, and Gert Cauwenberghs. 2018. Deep supervised learning using local errors.
Front. Neurosci. 12 (2018), 608.
[158] E. Mueggler, B. Huber, and D. Scaramuzza. 2014. Event-based, 6-DOF pose tracking for high-speed maneuvers. In
IEEE/RSJ International Conference on Intelligent Robots and Systems. 2761–2768. DOI:https://doi.org/10.1109/IROS.
2014.6942940
[159] Halid Mulaosmanovic, Elisabetta Chicca, Martin Bertele, Thomas Mikolajick, and Stefan Slesazeck. 2018. Mimicking
biological neurons with a nanoscale ferroelectric transistor. Nanoscale 10, 46 (2018), 21755–21763.
[160] Halid Mulaosmanovic, Thomas Mikolajick, and Stefan Slesazeck. 2018. Accumulative polarization reversal in
nanoscale ferroelectric transistors. ACS Appl. Mater. Interf. 10, 28 (2018), 23997–24002.
[161] H. Mulaosmanovic, J. Ocker, S. Müller, M. Noack, J. Müller, P. Polakowski, T. Mikolajick, and S. Slesazeck. 2017. Novel
ferroelectric FET based synapse for neuromorphic systems. In Symposium on VLSI Technology. IEEE, T176–T177.
[162] Nishant Mysore, Gopabandhu Hota, Stephen R. Deiss, Bruno Umbria Pedroni, and Gert Cauwenberghs. 2021. Hi-
erarchical network connectivity and partitioning for reconfigurable large-scale neuromorphic systems. Front. Neurosci.
15 (2021), 797654. DOI:https://doi.org/10.3389/fnins.2021.797654
[163] Surya Narayanan, Karl Taht, Rajeev Balasubramonian, Edouard Giacomin, and Pierre-Emmanuel Gaillardon. 2020.
SpinalFlow: An architecture and dataflow tailored for spiking neural networks. In ACM/IEEE 47th Annual Interna-
tional Symposium on Computer Architecture (ISCA). IEEE, 349–362.
[164] Emre Neftci and Giacomo Indiveri. 2010. A device mismatch compensation method for VLSI neural networks. In
Biomedical Circuits and Systems Conference (BioCAS). IEEE, 262–265.
[165] Emre O. Neftci, Charles Augustine, Somnath Paul, and Georgios Detorakis. 2017. Event-driven random back-
propagation: Enabling neuromorphic deep learning machines. Front. Neurosci. 11 (2017), 324.
[166] Emre O. Neftci, Hesham Mostafa, and Friedemann Zenke. 2019. Surrogate gradient learning in spiking neural net-
works: Bringing the power of gradient-based optimization to spiking neural networks. IEEE Sig. Process. Mag. 36,
6 (2019), 51–63.
[167] Wilten Nicola and Claudia Clopath. 2017. Supervised learning in spiking neural networks with FORCE training.
Nature Commun. 8, 1 (Dec. 2017), 2208. DOI:https://doi.org/10.1038/s41467-017-01827-3
[168] Ewan Nurse, Benjamin S. Mashford, Antonio Jimeno Yepes, Isabell Kiral-Kornek, Stefan Harrer, and Dean R. Free-
stone. 2016. Decoding EEG and LFP signals using deep learning: Heading TrueNorth. In ACM International Conference
on Computing Frontiers. ACM, 259–266.
[169] G. Orchard, R. Benosman, R. Etienne-Cummings, and N. V. Thakor. 2013. A spiking neural network architecture for
visual motion estimation. In IEEE Biomedical Circuits and Systems Conference (BioCAS). 298–301. DOI:https://doi.org/
10.1109/BioCAS.2013.6679698
[170] Garrick Orchard, Ajinkya Jayawant, Gregory K. Cohen, and Nitish Thakor. 2015. Converting static image datasets
to spiking neuromorphic datasets using saccades. Front. Neurosci. 9 (2015), 437.
[171] Eustace Painkras, Luis A. Plana, Jim Garside, Steve Temple, Francesco Galluppi, Cameron Patterson, David R. Lester,
Andrew D. Brown, and Steve B. Furber. 2013. SpiNNaker: A 1-W 18-core system-on-chip for massively-parallel neural
network simulation. IEEE J. Solid-state Circ. 48, 8 (2013), 1943–1953.
[172] Giorgio Palma, Manan Suri, Damien Querlioz, Elisa Vianello, and Barbara De Salvo. 2013. Stochastic neuron design
using conductive bridge RAM. In IEEE/ACM International Symposium on Nanoscale Architectures (NANOARCH). IEEE,
95–100.
[173] Priyadarshini Panda, Jason M. Allred, Shriram Ramanathan, and Kaushik Roy. 2018. ASP: Learning to forget with
adaptive synaptic plasticity in spiking neural networks. IEEE J. Emerg. Select. Topics. Circ. Syst. 8, 1 (2018), 51–64.
[174] Priyadarshini Panda and Kaushik Roy. 2016. Unsupervised regenerative learning of hierarchical features in spiking
deep networks for object recognition. In International Joint Conference on Neural Networks. IEEE, 299–306.
[175] Priyadarshini Panda and Narayan Srinivasa. 2017. Learning to recognize actions from limited training examples
using a recurrent spiking neural model. CoRR abs/1710.07354 (2017). arXiv:1710.07354
[176] Angeliki Pantazi, Stanisław Woźniak, Tomas Tuma, and Evangelos Eleftheriou. 2016. All-memristive neuromorphic
computing with level-tuned neurons. Nanotechnology 27, 35 (2016), 355205.
[177] F. Paredes-Vallés, Kirk Y. W. Scheper, and G. de Croon. 2020. Unsupervised learning of a hierarchical spiking neural
network for optical flow estimation: From events to global motion perception. IEEE Trans. Pattern Anal. Mach. Intell.
42 (2020), 2051–2064.
[178] Maryam Parsa, J. Parker Mitchell, Catherine D. Schuman, Robert M. Patton, Thomas E. Potok, and Kaushik Roy. 2019.
Bayesian-based hyperparameter optimization for spiking neuromorphic systems. In IEEE International Conference on
Big Data (Big Data). 4472–4478. DOI:https://doi.org/10.1109/BigData47090.2019.9006383
[179] Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2013. On the difficulty of training recurrent neural networks.
In International Conference on Machine Learning. 1310–1318.
[180] G. Pedretti, V. Milo, S. Ambrogio, R. Carboni, S. Bianchi, A. Calderoni, N. Ramaswamy, A. S. Spinelli, and D. Ielmini.
2017. Memristive neural network for on-line learning and tracking with brain-inspired spike timing dependent plas-
ticity. Sci. Rep. 7, 1 (2017), 5288.
[181] Michael Pfeiffer and Thomas Pfeil. 2018. Deep learning with spiking neurons: Opportunities and challenges. Front.
Neurosci. 12 (2018), 774.
[182] Wachirawit Ponghiran and Kaushik Roy. 2021. Hybrid analog-spiking long short-term memory for energy efficient
computing on edge devices. In Design, Automation & Test in Europe Conference & Exhibition (DATE). IEEE.
[183] Wachirawit Ponghiran, Gopalakrishnan Srinivasan, and Kaushik Roy. 2019. Reinforcement learning with low-
complexity liquid state machines. Front. Neurosci. 13 (2019), 883. DOI:https://doi.org/10.3389/fnins.2019.00883
[184] Mirko Prezioso, F. Merrikh Bayat, Brian Hoskins, K. Likharev, and D. Strukov. 2016. Self-adaptive spike-time-
dependent plasticity of metal-oxide memristors. Sci. Rep. 6 (2016), 21331.
[185] M. Prezioso, M. R. Mahmoodi, F. Merrikh Bayat, H. Nili, H. Kim, A. Vincent, and D. B. Strukov. 2018. Spike-timing-
dependent plasticity learning of coincidence detection with passively integrated memristive circuits. Nature Com-
mun. 9, 1 (2018), 1–8.
[186] Guo-qiang Bi and Mu-ming Poo. 1998. Synaptic modifications in cultured hippocampal neurons: Dependence on
spike timing, synaptic strength, and postsynaptic cell type. J. Neurosci. 18, 24 (Dec. 1998), 10464–10472. DOI:https://
doi.org/10.1523/jneurosci.18-24-10464.1998
[187] Ning Qiao, Hesham Mostafa, Federico Corradi, Marc Osswald, Fabio Stefanini, Dora Sumislawska, and Giacomo
Indiveri. 2015. A reconfigurable on-line learning spiking neuromorphic processor comprising 256 neurons and 128K
synapses. Front. Neurosci. 9 (2015), 141.
[188] Damien Querlioz, Olivier Bichler, Adrien Francis Vincent, and Christian Gamrat. 2015. Bioinspired programming of
memory devices for implementing an inference engine. Proc. IEEE 103, 8 (2015), 1398–1416.
[189] Bipin Rajendran, Yong Liu, Jae-sun Seo, Kailash Gopalakrishnan, Leland Chang, Daniel J. Friedman, and Mark B.
Ritter. 2012. Specifications of nanoscale devices and circuits for neuromorphic computational systems. IEEE Trans.
Electron Dev. 60, 1 (2012), 246–253.
[190] Shubha Ramakrishnan, Paul E. Hasler, and Christal Gordon. 2011. Floating gate synapses with spike-time-dependent
plasticity. IEEE Trans. Biomed. Circ. Syst. 5, 3 (2011), 244–252.
[191] Venkat Rangan, Abhishek Ghosh, Vladimir Aparin, and Gert Cauwenberghs. 2010. A subthreshold aVLSI implemen-
tation of the Izhikevich simple neuron model. In Annual International Conference of the IEEE Engineering in Medicine
and Biology. IEEE, 4164–4167.
[192] Nitin Rathi, Priyadarshini Panda, and Kaushik Roy. 2018. STDP-based pruning of connections and weight quantiza-
tion in spiking neural networks for energy-efficient recognition. IEEE Trans. Comput.-aid. Des. Integ. Circ. Syst. 38,
4 (2018), 668–677.
[193] Nitin Rathi and Kaushik Roy. 2020. Diet-SNN: Direct input encoding with leakage and threshold optimization in
deep spiking neural networks. arXiv preprint arXiv:2008.03658 (2020).
[194] Nitin Rathi, Gopalakrishnan Srinivasan, Priyadarshini Panda, and Kaushik Roy. 2020. Enabling deep spiking neu-
ral networks with hybrid conversion and spike timing dependent backpropagation. In International Conference on
Learning Representations. Retrieved from https://openreview.net/forum?id=B1xSperKvH.
[195] Christina Rohde, Byung Joon Choi, Doo Seok Jeong, Seol Choi, Jin-Shi Zhao, and Cheol Seong Hwang. 2005. Identi-
fication of a determining parameter for resistive switching of TiO2 thin films. Appl. Phys. Lett. 86, 26 (2005), 262907.
[196] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. 2015. U-Net: Convolutional networks for biomedical image seg-
mentation. In Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015. Springer International
Publishing, Cham, 234–241.
[197] Deboleena Roy, Indranil Chakraborty, and Kaushik Roy. 2019. Scaling deep spiking neural networks with binary
stochastic activations. In IEEE International Conference on Cognitive Computing (ICCC). IEEE, 50–58.
[198] Bodo Rueckauer and Shih-Chii Liu. 2018. Conversion of analog to spiking neural networks using sparse temporal
coding. In IEEE International Symposium on Circuits and Systems (ISCAS). IEEE, 1–5.
[199] Bodo Rueckauer, Iulia-Alexandra Lungu, Yuhuang Hu, Michael Pfeiffer, and Shih-Chii Liu. 2017. Conversion
of continuous-valued deep networks to efficient event-driven networks for image classification. Front. Neurosci.
11 (2017), 682.
[200] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy,
Aditya Khosla, Michael Bernstein et al. 2015. ImageNet large scale visual recognition challenge. Int. J. Comput. Vis.
115, 3 (2015), 211–252.
[201] Sylvain Saïghi, Christian G. Mayr, Teresa Serrano-Gotarredona, Heidemarie Schmidt, Gwendal Lecerf, Jean Tomas,
Julie Grollier, Sören Boyn, Adrien F. Vincent, Damien Querlioz et al. 2015. Plasticity in memristive devices for spiking
neural networks. Front. Neurosci. 9 (2015), 51.
[202] Arash Samadi, Timothy P. Lillicrap, and Douglas B. Tweed. 2017. Deep learning with dynamic spiking neurons and
fixed feedback weights. Neural Computat. 29, 3 (2017), 578–602.
[203] Johannes Schemmel, Daniel Brüderle, Andreas Grübl, Matthias Hock, Karlheinz Meier, and Sebastian Millner. 2010.
A wafer-scale neuromorphic hardware system for large-scale neural modeling. In IEEE International Symposium on
Circuits and Systems (ISCAS). IEEE, 1947–1950.
[204] Abhronil Sengupta, Aparajita Banerjee, and Kaushik Roy. 2016. Hybrid Spintronic-CMOS spiking neural network
with on-chip learning: Devices, circuits, and systems. Phys. Rev. Appl. 6, 6 (2016), 064003.
[205] Abhronil Sengupta, Priyadarshini Panda, Parami Wijesinghe, Yusung Kim, and Kaushik Roy. 2016. Magnetic tunnel
junction mimics stochastic cortical spiking neurons. Sci. Rep. 6 (2016), 30039.
[206] Abhronil Sengupta, Maryam Parsa, Bing Han, and Kaushik Roy. 2016. Probabilistic deep spiking neural systems
enabled by magnetic tunnel junction. IEEE Trans. Electron Dev. 63, 7 (2016), 2963–2970.
[207] Abhronil Sengupta and Kaushik Roy. 2016. A vision for all-spin neural networks: A device to system perspective.
IEEE Trans. Circ. Syst. I: Reg. Papers 63, 12 (2016), 2267–2277.
[208] Abhronil Sengupta and Kaushik Roy. 2017. Encoding neural and synaptic functionalities in electron spin: A pathway
to efficient neuromorphic computing. Appl. Phys. Rev. 4, 4 (2017), 041105.
[209] Abhronil Sengupta, Yuting Ye, Robert Wang, Chiao Liu, and Kaushik Roy. 2019. Going deeper in spiking neural
networks: VGG and residual architectures. Front. Neurosci. 13 (2019).
[210] Kyungah Seo, Insung Kim, Seungjae Jung, Minseok Jo, Sangsu Park, Jubong Park, Jungho Shin, Kuyyadi P. Biju,
Jaemin Kong, Kwanghee Lee et al. 2011. Analog memory and spike-timing-dependent plasticity characteristics of a
nanoscale titanium oxide bilayer resistive switching device. Nanotechnology 22, 25 (2011), 254023.
[211] Rafael Serrano-Gotarredona, Teresa Serrano-Gotarredona, Antonio Acosta-Jimenez, and Bernabé Linares-Barranco.
2006. A neuromorphic cortical-layer microchip for spike-based event processing vision systems. IEEE Trans. Circ.
Syst. I: Reg. Papers 53, 12 (2006), 2548–2566.
[212] Saima Sharmin, Nitin Rathi, Priyadarshini Panda, and Kaushik Roy. 2020. Inherent adversarial robustness of deep
spiking neural networks: Effects of discrete input encoding and non-linear activations. In European Conference on
Computer Vision. Springer, 399–414.
[213] Sumit Bam Shrestha and Garrick Orchard. 2018. SLAYER: Spike layer error reassignment in time. In International
Conference on Advances in Neural Information Processing Systems. 1412–1421.
[214] David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrit-
twieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot et al. 2016. Mastering the game of Go with deep
neural networks and tree search. Nature 529, 7587 (2016), 484.
[215] Karen Simonyan and Andrew Zisserman. 2014. Very deep convolutional networks for large-scale image recognition.
arXiv preprint arXiv:1409.1556 (2014).
[216] S. Singh, A. Sharma, A. Pattnaik, N. Jao, S. Lu, K. Yang, A. Sengupta, V. Narayanan, and C. Das. 2020. NEBULA: A
neuromorphic spin-based ultra-low power architecture for SNNs and ANNs. In IEEE/ACM International Symposium
on Computer Architecture (ISCA’20). IEEE.
[217] B. Son, Y. Suh, S. Kim, H. Jung, J. Kim, C. Shin, K. Park, K. Lee, J. Park, J. Woo, Y. Roh, H. Lee, Y. Wang, I. Ovsiannikov,
and H. Ryu. 2017. 4.1 A 640×480 dynamic vision sensor with a 9μm pixel and 300Meps address-event representation.
In IEEE International Solid-State Circuits Conference (ISSCC). 66–67. DOI:https://doi.org/10.1109/ISSCC.2017.7870263
[218] Sen Song, Kenneth D. Miller, and Larry F. Abbott. 2000. Competitive Hebbian learning through spike-timing-
dependent synaptic plasticity. Nature Neurosci. 3, 9 (2000), 919.
[219] Nicholas Soures and Dhireesha Kudithipudi. 2019. Deep liquid state machines with neural plasticity for video activity
recognition. Front. Neurosci. 13 (2019), 686. DOI:https://doi.org/10.3389/fnins.2019.00686
[220] Gopalakrishnan Srinivasan, Priyadarshini Panda, and Kaushik Roy. 2018. SpiLinC: Spiking liquid-ensemble comput-
ing for unsupervised speech and image recognition. Front. Neurosci. 12 (2018), 524. DOI:https://doi.org/10.3389/fnins.
2018.00524
[221] Gopalakrishnan Srinivasan, Priyadarshini Panda, and Kaushik Roy. 2018. STDP-based unsupervised feature learn-
ing using convolution-over-time in spiking neural networks for energy-efficient neuromorphic computing. ACM J.
Emerg. Technol. Comput. Syst. 14, 4 (2018), 1–12.
[222] Gopalakrishnan Srinivasan and Kaushik Roy. 2019. ReStoCNet: Residual stochastic binary convolutional spiking
neural network for memory-efficient neuromorphic computing. Front. Neurosci. 13 (2019), 189.
[223] Gopalakrishnan Srinivasan and Kaushik Roy. 2021. BlocTrain: Block-wise conditional training and inference for
efficient spike-based deep learning. Front. Neurosci. 15 (2021), 603433. DOI:https://doi.org/10.3389/fnins.2021.603433
[224] Gopalakrishnan Srinivasan, Abhronil Sengupta, and Kaushik Roy. 2016. Magnetic tunnel junction based long-term
short-term stochastic synapse for a spiking neural network with on-chip STDP learning. Sci. Rep. 6 (2016), 29545.
[225] Dmitri B. Strukov, Gregory S. Snider, Duncan R. Stewart, and R. Stanley Williams. 2008. The missing memristor
found. Nature 453, 7191 (2008), 80.
[226] Manan Suri, Damien Querlioz, Olivier Bichler, Giorgio Palma, Elisa Vianello, Dominique Vuillaume, Christian Gam-
rat, and Barbara DeSalvo. 2013. Bio-inspired stochastic computing using binary CBRAM synapses. IEEE Trans. Elec-
tron Dev. 60, 7 (2013), 2402–2409.
[227] Ilya Sutskever. 2013. Training Recurrent Neural Networks. Ph.D. Dissertation. University of Toronto, Toronto, Ontario,
Canada.
[228] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. 2016. Rethinking the inception
architecture for computer vision. In IEEE Conference on Computer Vision and Pattern Recognition. 2818–2826.
[229] Aykut Tahtirvanci, Akif Durdu, and Burak Yilmaz. 2018. Classification of EEG signals using spiking neural networks.
In 26th Signal Processing and Communications Applications Conference (SIU). IEEE, 1–4.
[230] Amirhossein Tavanaei, Timothée Masquelier, and Anthony S. Maida. 2016. Acquisition of visual features through
probabilistic spike-timing-dependent plasticity. In International Joint Conference on Neural Networks. IEEE, 307–314.
[231] Chetan Singh Thakur, Jamal Lottier Molin, Gert Cauwenberghs, Giacomo Indiveri, Kundan Kumar, Ning Qiao,
Johannes Schemmel, Runchun Wang, Elisabetta Chicca, Jennifer Olson Hasler et al. 2018. Large-scale neuromorphic
spiking array processors: A quest to mimic the brain. Front. Neurosci. 12 (2018), 891.
[232] Johannes C. Thiele, Olivier Bichler, and Antoine Dupret. 2018. Event-based, timescale invariant unsupervised online
deep learning with STDP. Front. Computat. Neurosci. 12 (2018), 46.
[233] Tomas Tuma, Manuel Le Gallo, Abu Sebastian, and Evangelos Eleftheriou. 2016. Detecting correlations using phase-
change neurons and synapses. IEEE Electron Dev. Lett. 37, 9 (2016), 1238–1241.
[234] Tomas Tuma, Angeliki Pantazi, Manuel Le Gallo, Abu Sebastian, and Evangelos Eleftheriou. 2016. Stochastic phase-
change neurons. Nature Nanotechnol. 11, 8 (2016), 693.
[235] Robert Urbanczik and Walter Senn. 2014. Learning by the dendritic prediction of somatic spiking. Neuron 81, 3 (2014),
521–528.
[236] André Van Schaik. 2001. Building blocks for electronic spiking neural networks. Neural Netw. 14, 6-7 (2001), 617–628.
[237] André van Schaik, Craig Jin, Alistair McEwan, and Tara Julia Hamilton. 2010. A log-domain implementation of the
Izhikevich neuron model. In IEEE International Symposium on Circuits and Systems. IEEE, 4253–4256.
[238] Adrien F. Vincent, Jérôme Larroque, Nicolas Locatelli, Nesrine Ben Romdhane, Olivier Bichler, Christian Gamrat,
Wei Sheng Zhao, Jacques-Olivier Klein, Sylvie Galdin-Retailleau, and Damien Querlioz. 2015. Spin-transfer torque
magnetic memory as a stochastic memristive synapse for neuromorphic systems. IEEE Trans. Biomed. Circ. Syst. 9,
2 (2015), 166–174.
[239] R. Jacob Vogelstein, Udayan Mallik, Eugenio Culurciello, Gert Cauwenberghs, and Ralph Etienne-Cummings. 2007.
A multichip neuromorphic system for spike-based visual information processing. Neural Computat. 19, 9 (2007),
2281–2300.
[240] R. Jacob Vogelstein, Udayan Mallik, Joshua T. Vogelstein, and Gert Cauwenberghs. 2007. Dynamically reconfigurable
silicon array of spiking neurons with conductance-based synapses. IEEE Trans. Neural Netw. 18, 1 (2007), 253–265.
[241] John Von Neumann. 2012. The Computer and the Brain. Yale University Press.
[242] Edward Wallace, Marc Benayoun, Wim Van Drongelen, and Jack D. Cowan. 2011. Emergent oscillations in networks
of stochastic spiking neurons. PLoS One 6, 5 (2011), e14804.
[243] I-Ting Wang, Yen-Chuan Lin, Yu-Fen Wang, Chung-Wei Hsu, and Tuo-Hung Hou. 2014. 3D synaptic architecture
with ultralow sub-10 fJ energy per spike for neuromorphic computation. In IEEE International Electron Devices Meet-
ing. IEEE, 28–5.
[244] Runchun Wang and André van Schaik. 2018. Breaking Liebig’s law: An advanced multipurpose neuromorphic engine.
Front. Neurosci. 12 (2018), 593.
[245] Yu Wang, Tianqi Tang, Lixue Xia, Boxun Li, Peng Gu, Huazhong Yang, Hai Li, and Yuan Xie. 2015. Energy efficient
RRAM spiking neural network for real time classification. In 25th Great Lakes Symposium on VLSI. ACM, 189–194.
[246] Zhongqiang Wang, Stefano Ambrogio, Simone Balatti, and Daniele Ielmini. 2015. A 2-transistor/1-resistor artificial
synapse capable of communication and stochastic learning in neuromorphic systems. Front. Neurosci. 8 (2015), 438.
[247] Zhongrui Wang, Saumil Joshi, Sergey Savel’ev, Wenhao Song, Rivu Midya, Yunning Li, Mingyi Rao, Peng Yan, Shiva
Asapu, Ye Zhuo et al. 2018. Fully memristive neural networks for pattern classification with unsupervised learning.
Nature Electron. 1, 2 (2018), 137.
[248] Yukio Watanabe, J. G. Bednorz, A. Bietsch, Ch. Gerber, D. Widmer, A. Beck, and S. J. Wind. 2001. Current-driven
insulator–conductor transition and nonvolatile memory in chromium-doped SrTiO3 single crystals. Appl. Phys. Lett.
78, 23 (2001), 3738–3740.
[249] Jayawan H. B. Wijekoon and Piotr Dudek. 2008. Compact silicon neuron circuit with spiking and bursting behaviour.
Neural Netw. 21, 2-3 (2008), 524–534.
[250] H.-S. Philip Wong, Simone Raoux, SangBum Kim, Jiale Liang, John P. Reifenberg, Bipin Rajendran, Mehdi Asheghi,
and Kenneth E. Goodson. 2010. Phase change memory. Proc. IEEE 98, 12 (2010), 2201–2227.
[251] H.-S. Philip Wong and Sayeef Salahuddin. 2015. Memory leads the way to better computing. Nature Nanotechnol. 10,
3 (2015), 191.
[252] Stanisław Woźniak, Angeliki Pantazi, Thomas Bohnstingl, and Evangelos Eleftheriou. 2020. Deep learning incorpo-
rating biologically inspired neural dynamics and in-memory computing. Nature Mach. Intell. 2, 6 (2020), 325–336.
[253] Hao Wu, Yueyi Zhang, Wenming Weng, Yongting Zhang, Zhiwei Xiong, Zheng-Jun Zha, Xiaoyan Sun, and Feng
Wu. 2021. Training spiking neural networks with accumulated spiking flow. Proceedings of the AAAI Conference on
Artificial Intelligence 35, 12 (2021), 10320–10328.
[254] Yujie Wu, Lei Deng, Guoqi Li, Jun Zhu, and Luping Shi. 2018. Spatio-temporal backpropagation for training high-
performance spiking neural networks. Front. Neurosci. 12 (2018).
[255] Yujie Wu, Lei Deng, Guoqi Li, Jun Zhu, Yuan Xie, and Luping Shi. 2019. Direct training for spiking neural networks:
Faster, larger, better. In AAAI Conference on Artificial Intelligence. 1311–1318.
[256] Yannan Xing, Gaetano Di Caterina, and John Soraghan. 2020. A new spiking convolutional recurrent neural network
(SCRNN) with applications to event-based hand gesture recognition. Front. Neurosci. 14 (2020), 1143.
[257] Xiaowei Xu, Yukun Ding, Sharon Xiaobo Hu, Michael Niemier, Jason Cong, Yu Hu, and Yiyu Shi. 2018. Scaling for
edge inference of deep neural networks. Nature Electron. 1, 4 (2018), 216.
[258] F. Xue, Hang Guan, and X. Li. 2016. Improving liquid state machine with hybrid plasticity. In IEEE Advanced Infor-
mation Management, Communicates, Electronic and Automation Control Conference (IMCEC). 1955–1959. DOI:https://
doi.org/10.1109/IMCEC.2016.7867559
[259] Zhanglu Yan, Jun Zhou, and Weng-Fai Wong. 2021. Energy efficient ECG classification with spiking neural network.
Biomed. Sig. Process. Contr. 63 (2021), 102170.
[260] Chengxi Ye, Anton Mitrokhin, Cornelia Fermüller, James A. Yorke, and Yiannis Aloimonos. 2019. Unsupervised Learn-
ing of Dense Optical Flow, Depth and Egomotion from Sparse Event Data. (2019). arXiv:cs.CV/1809.08625
[261] Bojian Yin, Federico Corradi, and Sander M. Bohté. 2021. Accurate and efficient time-domain classification with
adaptive spiking recurrent neural networks. Nature Mach. Intell. 3, 10 (2021), 905–913.
[262] Shimeng Yu. 2018. Neuro-inspired computing with emerging nonvolatile memorys. Proc. IEEE 106, 2 (2018), 260–285.
[263] Theodore Yu and Gert Cauwenberghs. 2010. Analog VLSI biophysical neurons and synapses with programmable
membrane channel kinetics. IEEE Trans. Biomed. Circ. Syst. 4, 3 (2010), 139–148.
[264] Friedemann Zenke and Surya Ganguli. 2018. SuperSpike: Supervised learning in multilayer spiking neural networks.
Neural Computat. 30, 6 (2018), 1514–1541.
[265] Friedemann Zenke and Emre O. Neftci. 2021. Brain-inspired learning on neuromorphic substrates. Proc. IEEE 109,
5 (2021), 935–950.
[266] Deming Zhang, Lang Zeng, Youguang Zhang, Weisheng Zhao, and Jacques Olivier Klein. 2016. Stochastic Spintronic
device based synapses and spiking neurons for neuromorphic computation. In IEEE/ACM International Symposium
on Nanoscale Architectures (NANOARCH). IEEE, 173–178.
[267] Ming Zhang, Zonghua Gu, Nenggan Zheng, De Ma, and Gang Pan. 2020. Efficient spiking neural networks with
logarithmic temporal coding. IEEE Access 8 (2020), 98156–98167.
[268] Wenrui Zhang and Peng Li. 2020. Temporal spike sequence learning via backpropagation for deep spiking neural
networks. arXiv preprint arXiv:2002.10085 (2020).
[269] Yang Zhang, Zhongrui Wang, Jiadi Zhu, Yuchao Yang, Mingyi Rao, Wenhao Song, Ye Zhuo, Xumeng Zhang, Menglin
Cui, Linlin Shen et al. 2020. Brain-inspired computing with memristors: Challenges in devices, circuits, and systems.
Appl. Phys. Rev. 7, 1 (2020), 011308.
[270] Alex Zhu, Liangzhe Yuan, Kenneth Chaney, and Kostas Daniilidis. 2018. EV-FlowNet: Self-supervised optical flow
estimation for event-based cameras. Robot.: Sci. Syst. XIV (June 2018). DOI:https://doi.org/10.15607/rss.2018.xiv.062
[271] A. Z. Zhu, D. Thakur, T. Özaslan, B. Pfrommer, V. Kumar, and K. Daniilidis. 2018. The multivehicle stereo event
camera dataset: An event camera dataset for 3D perception. IEEE Robot. Autom. Lett. 3, 3 (2018), 2032–2039.
[272] A. Z. Zhu, L. Yuan, K. Chaney, and K. Daniilidis. 2019. Unsupervised event-based learning of optical flow, depth,
and egomotion. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 989–997. DOI:https://
doi.org/10.1109/CVPR.2019.00108
[273] Mohammed A. Zidan, John Paul Strachan, and Wei D. Lu. 2018. The future of electronics based on memristive systems.
Nature Electron. 1, 1 (2018), 22.
[274] Robert S. Zucker and Wade G. Regehr. 2002. Short-term synaptic plasticity. Ann. Rev. Physiol. 64, 1 (2002), 355–405.
[275] Rui Zuo, Jing Wei, Xiaonan Li, Chunlin Li, Cui Zhao, Zhaohui Ren, Ying Liang, Xinling Geng, Chenxi Jiang, Xiaofeng
Yang et al. 2019. Automated detection of high-frequency oscillations in epilepsy based on a convolutional neural
network. Front. Computat. Neurosci. 13 (2019), 6.