Prakriti Report 1
ON NEUROMORPHIC COMPUTING
0101EC221096
In October 2015, the White House Office of Science and Technology Policy released A
Nanotechnology-Inspired Grand Challenge for Future Computing, which states the following:
“Create a new type of computer that can proactively interpret and learn from data, solve
unfamiliar problems using what it has learned, and operate with the energy efficiency of the
human brain.” As a result, several federal agencies (DOE, NSF, DOD, NIST, and IC) collaborated
to deliver “A Federal Vision for Future Computing: A Nanotechnology-Inspired Grand Challenge.”
However, DOE, within its mission, should make neuromorphic computing a priority for the
following important reasons:
In the von Neumann architecture, memory and computation are separated by a bus,
and both the data for the program at hand and the program itself have to be
transferred from memory to a central processing unit (CPU). As CPUs have grown
faster, memory access and transfer speeds have not improved at the same scale
[Hennessy2011]. Moreover, even CPU performance increases are slowing, as Moore’s
law, which states that the number of transistors on a chip doubles roughly every 2
years, is beginning to slow (if not plateau). Though there is some argument as to
whether Moore’s law has actually come to an end, there is a consensus that Dennard
scaling, which says that as transistors get smaller, power density stays constant,
ended around 2004 [Shalf2015]. As a consequence, energy consumption on chips has
increased as we continue to add transistors. While we are simultaneously experiencing
issues associated with the von Neumann bottleneck, the computation-memory gap,
the plateau of Moore’s law, and the end of Dennard scaling, we are gathering data in greater quantities than ever
before. Data comes in a variety of forms and is gathered in vast quantities through a
plethora of mechanisms: by sensors in real environments; by companies,
organizations, and governments; and from scientific instruments or simulations. Much
of this data sits idle in storage, is summarized for a researcher using statistical
techniques, or is thrown away completely because current computing resources and
associated algorithms cannot handle the scale of data that is being gathered.
Moreover, beyond intelligent data analysis needs, as computing has developed, the
types of problems we as users want computers to solve have expanded. In particular,
we are expecting more and more intelligent behaviour from our systems.
These issues and others have spurred the development of non–von Neumann architectures.
In particular, the goal of pursuing new architectures is not to find a replacement for the
traditional von Neumann paradigm but to find architectures and devices that can complement
the existing paradigm and help to address some of its weaknesses.
Neuromorphic architectures are one of the proposed complementary architectures, for several
reasons:
1. Neuromorphic architectures typically collocate memory and computation, helping to mitigate
the von Neumann bottleneck.
2. Neuromorphic architectures often result in lower power consumption (which can be a result
of analog or mixed analog-digital devices, or of the event-driven nature of the systems).
3. Common data analysis techniques, such as neural networks, have natural implementations
on neuromorphic devices and thus are applicable to many “big data” problems.
4. By building the architecture using brain-inspired components, there is potential that a type
of intelligent behaviour will emerge.
Because of the broad definition of neuromorphic computing and since the community spans a
large number of fields (neuroscience, computer science, engineering, and materials science), it
can be difficult to capture a full picture of the current state of neuromorphic computing research.
The goals and motivations for pursuing neuromorphic computing research vary widely from
project to project and from community to community.
One view of neuromorphic systems is that they represent one pole of a spectrum of programmable
computing platforms (Figure 1). On one end of that spectrum is the synchronous von Neumann
architecture. The number of cores or computational units increases in moving across this
spectrum.
One group of neuromorphic computing research is motivated by computational neuroscience and
thus is interested in building hardware and software systems capable of completing large-scale,
high-accuracy simulations of biological neural systems in order to better understand the biological
brain. Though the primary motivation for these types of projects is to perform large-scale
computational neuroscience, many of the projects are also being studied as general computing
platforms.

A second group of neuromorphic computing research is motivated by accelerating existing deep
learning networks and training, and
thus is interested in building hardware that is customized specifically for certain types of neural
networks (e.g., convolutional neural networks) and certain types of training algorithms (e.g., back-
propagation). Deep learning achieves state-of-the-art results for a certain set of tasks (such as
image recognition and classification), but depending on the task, training on traditional CPUs can
take weeks or even months. Most state-of-the-art results on deep learning have been obtained
by utilizing graphics processing units (GPUs) to perform the training process. Much of the deep
learning research in recent years has been motivated by commercial interests, and as such the
custom deep learning–based neuromorphic systems have been primarily created by industry
(e.g., Google’s Tensor Processing Unit [Jouppi2016] and the Nirvana Engine [Nervana2016]).
These systems fit the broad definition of neuromorphic computing in that they are neural-inspired
systems on non–von Neumann hardware. However, there are several characteristics of deep
learning–based systems that are undesirable in other neuromorphic systems, such as the reliance
on offline, back-propagation-based training.

The third and perhaps most common set of neuromorphic systems is motivated by developing
efficient, neurally inspired computational hardware systems, usually based on spiking and non-
spiking neural networks. These systems may include digital or analog implementations of
neurons, synapses, and perhaps other biologically inspired components. Example systems in this
category include the TrueNorth system [Merolla2014], HRL’s Latigo chip [Supusepa], Neurogrid
[Benjamin2014], and Darwin [Shen2016]. It is also worth noting that there are neuromorphic
implementations using off-the-shelf commodities, such as field programmable gate arrays
(FPGAs), which are useful both as prototype systems and, because of their relatively low cost, as
systems with real-world value.

One of the most popular technologies associated with building neuromorphic systems is the
memristor (also known as ReRAM). There are two general types of memristors: non-volatile
memristors, which are typically used to implement synapses, and locally active memristors, which
can be used to represent a neuron or axon (Figure 2). Non-volatile memristors are also used to
implement activation functions and other logical computations. Memristors used to implement synapses are
often used in a crossbar (Figure 3). The crossbar can operate in either a current mode or a voltage
mode, depending on the energy optimization constraints. The use of spiking neural networks as
a model in some neuromorphic systems allows these types of systems to have asynchronous,
event-driven computation, reducing energy overhead. Another use of these devices is to realize
a co-located dense memory for storing intermediate results within a network (such as weights).
Also, the use of technologies such as memristors has the potential to enable large-scale systems.
Figure 3
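To make the crossbar computation concrete, the following short Python sketch models the analog matrix-vector product a voltage-mode crossbar performs. This is a simplified numerical illustration, not a device model: the conductance ranges, the differential-pair weight mapping, and all names below are assumptions chosen for the example.

import numpy as np

# Toy voltage-mode crossbar: input voltages drive the rows, each memristor passes
# current I = G * V (Ohm's law), and currents sum along each column (Kirchhoff's
# current law), so the column currents are dot products computed in the analog domain.
rng = np.random.default_rng(seed=0)
G = rng.uniform(1e-6, 1e-4, size=(4, 3))   # programmed conductances (siemens), 4 rows x 3 columns
v = np.array([0.2, 0.0, 0.5, 0.1])         # row input voltages (volts)
i_col = G.T @ v                            # column output currents (amperes)

# Signed synaptic weights are commonly encoded as the difference of two devices
# per weight; the output is then the difference of the two column currents.
G_pos = rng.uniform(1e-6, 1e-4, size=(4, 3))
G_neg = rng.uniform(1e-6, 1e-4, size=(4, 3))
i_signed = (G_pos - G_neg).T @ v
print(i_col, i_signed)

In a network layer, the stored conductances play the role of the co-located weight memory described above: the weights never cross a bus, the inputs simply flow through them.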
Open Issues
Neuromorphic computing research spans several disciplines, including computer and electrical
engineering, device physics, and materials science. Each research area
has a set of open research questions that must be addressed in order for the field to move
forward. The focus of the workshop was to identify the major questions from a computing
perspective, that is, the questions that can be addressed by
computational scientists, computer scientists, and mathematicians and whose solutions can
benefit from the use of high performance computing (HPC) resources. Based on the submissions
to the workshop and the presentations and discussions during the workshop, six major questions
emerged, each of which is described in the following sections and framed in the bigger picture of
a neuromorphic architecture (as shown in Figure 5). It is important to note that none of these
questions can be answered by computing alone, and answering them will require close collaboration with other fields.
What are the basic computational building blocks and general architectures of
neuromorphic systems?
Figure 4
The basic computational building blocks of most neuromorphic systems are neurons and
synapses. Some neuromorphic systems go further and include notions of axons, dendrites, and
other neural structures, but in general, neurons and synapses are the key components of these
systems. Spiking neuromorphic systems communicate via spikes: whenever enough charge has
flowed in at its synapses, a neuron generates an outgoing spike,
which causes charge to be injected into each post-synaptic neuron. What neuron and synapse
models are appropriate? When defining a neuromorphic architecture or model, the associated
neuron and synapse models must be chosen from among a large variety of neuron and synapse
models. Example neuron models range from very simple (e.g., McCulloch-Pitts [McCulloch1943])
to highly complex, biologically realistic models. Synapse models span a similar range,
from very simple synapses that are only represented by weight values (such as those used
with McCulloch-Pitts neurons) to extremely complex, biologically realistic synapses that include
notions of learning or adaptation. Artificial synapses may or may not have a notion of delay,
depending on the implementation. Some synapse models represent synapse activity using a
conductance-based model [Vogelstein2007] rather than a weight value. Synaptic weights may be
adapted over the course of network simulation via learning mechanisms such as Hebbian
learning or spike-timing-dependent plasticity.
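To ground this range of models, the sketch below implements a leaky integrate-and-fire neuron, a common middle point between a McCulloch-Pitts threshold unit and a detailed biophysical model, followed by a minimal Hebbian-style weight update. All parameter values (time constant, threshold, learning rate) are illustrative assumptions, not values from the report.

import numpy as np

def lif_neuron(input_spikes, weights, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Simulate one leaky integrate-and-fire neuron over T timesteps.
    input_spikes: (T, n_in) binary array; weights: (n_in,) synaptic weights."""
    v = 0.0
    out_spikes = np.zeros(input_spikes.shape[0], dtype=int)
    for t in range(input_spikes.shape[0]):
        v += dt * (-v / tau)               # membrane leak toward rest
        v += input_spikes[t] @ weights     # charge injected by incoming spikes
        if v >= v_thresh:                  # enough charge accumulated: fire
            out_spikes[t] = 1
            v = v_reset                    # reset after the spike
    return out_spikes

rng = np.random.default_rng(1)
pre = (rng.random((100, 5)) < 0.2).astype(int)   # 5 random input spike trains
w = rng.uniform(0.1, 0.5, size=5)
post = lif_neuron(pre, w)

# Minimal Hebbian-style update: strengthen synapses whose inputs were active
# on timesteps when the neuron fired ("fire together, wire together").
eta = 0.01
w += eta * (pre * post[:, None]).sum(axis=0)

Setting tau very large and v_thresh to a fixed bias recovers a McCulloch-Pitts-like binary threshold unit, which illustrates how the simple and complex models sit on one continuum.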
What components beyond neurons and synapses might be included in a neuromorphic system?
Beyond neurons and synapses, there are a variety of other biologically inspired mechanisms that
may be worth considering as computational primitives. Astrocytes are glial cells in biological
brains that act as regulatory systems; in particular, they can stimulate, calm, synchronize, and
repair neurons. Certainly we will want to consider what computational effects these regulatory
systems might have. Such regulatory systems also have a significant effect on the behaviour of
biological brains. It is not clear how these systems influence the capabilities of biological brains,
such as learning, adaptation, and fault tolerance, but it is worthwhile to consider their inclusion
in neuromorphic models. In general, the goal should be to make minimalistic systems first and
then to grow their complexity as we understand how the systems operate together. If we start
with the proposition that it must be very, very complex to work, then we will surely fail.

There are many different types of neurons and synapses within biological neural systems, along
with other biological components such as glial cells; different areas of the brain have different
neuron types, synapse types, connectivity patterns, and supporting systems. Artificial neural
networks have taken inspiration from different areas of the brain for different types of neural
networks. For example, the structure of convolutional neural networks is inspired by the
organization and layout of the visual cortex [LeCun2015], whereas hierarchical temporal memory
(HTM) takes its inspiration from the organization of the neocortex [Hawkins2016,
WSP:Kudithipudi] (Figure 7). When considering model selection, it may be worthwhile to target
a specific functionality or set of applications and take inspiration from a particular part of the
brain that performs that functionality well in determining the characteristics of the model.
The primary focus of this report has been on the computing aspects of neuromorphic computing.
However, it is also necessary that we consider the hardware and device components. As compared
to conventional computing systems, neuromorphic computing systems and algorithms depend
directly on hardware that can realize neurons and synapses in an
efficient and natural manner. Without efficient hardware implementations that leverage new
materials and devices, the real growth of neuromorphic applications will be substantially
hindered. As noted throughout this report, significant effort will be needed on the part of the
computing community to work with researchers in the areas of neuromorphic materials, devices,
and circuitry. Neuromorphic systems will not and cannot be developed by a single field in
isolation. The decisions we make at the model and algorithm level should both inform and be
informed by what occurs at the materials, device, and circuitry levels. DOE has capabilities at all
levels of neuromorphic computing (models, algorithms, circuitry, devices, and materials) that can
be brought to bear on these challenges.
There are several intermediate steps that the community can take to begin to
address these challenges.
1. Large-Scale Simulators: The first short-term goal is the development of large-
scale simulators that are capable of supporting different neuromorphic
architectures at different levels of abstraction. High-level simulations will be
important for examining and testing different computational building blocks,
as well as connectivity patterns and structural programmability
requirements. High-level simulations can be used to develop and test
theoretical results, as well as to help develop algorithms for training and
learning. Insights gained from these simulations can also inform device
design. Lower-level simulations are also likely to be important. Large-scale
simulations at the device level, circuit level, or even materials level can be
used in concert with small-scale prototypes of devices to provide insight as to the
capabilities and characteristics of the system, including energy efficiency
estimates.
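As a hint of what the core of such a high-level simulator might look like, the toy Python sketch below processes spikes as timestamped events on a priority queue and updates neuron state only when events arrive, which is what makes large, sparsely active networks tractable. It omits leak and learning, uses a single fixed delay, and all names and parameters are illustrative assumptions; it does not describe any existing simulator.

import heapq

def simulate(initial_spikes, fanout, weights, delay=1.0, threshold=1.0, horizon=50.0):
    """Toy event-driven spiking-network simulation.
    initial_spikes: list of (time, neuron); fanout[n]: post-synaptic neurons of n;
    weights[(pre, post)]: synaptic weight. Returns the list of (time, neuron) spikes."""
    queue = list(initial_spikes)
    heapq.heapify(queue)
    potential = {}                             # membrane potentials, updated lazily
    fired = []
    while queue:
        t, n = heapq.heappop(queue)            # next spike event in time order
        if t > horizon:
            break
        fired.append((t, n))
        for m in fanout.get(n, []):            # deliver this spike to each target
            potential[m] = potential.get(m, 0.0) + weights[(n, m)]
            if potential[m] >= threshold:      # threshold crossing schedules a new spike
                potential[m] = 0.0
                heapq.heappush(queue, (t + delay, m))
    return fired

# A three-neuron chain: neuron 0 spikes at t=0 and drives 1, which drives 2.
print(simulate([(0.0, 0)], {0: [1], 1: [2]}, {(0, 1): 1.0, (1, 2): 1.0}))

Because work is done only per spike rather than per timestep, the same abstraction scales from high-level algorithm studies down to calibrated device-level models.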
Figure 5
FUTURE SCOPE
4. Applications: As data sets have continued to grow in recent years, the long-
term prognosis for data analysis across a broad range of applications appears
problematic. In particular, scientific data sets are typically very large with
numerous, dense, and highly interrelated measurements. One primary
characteristic of many of these scientific data sets is spatial and/or temporal
locality in the observation space.
In other words, it is rare that any two observations will be independent in both
time and space, and proximity of any two measurements in time and/or space
is highly relevant to an analysis of the data. Furthermore, the scale of the data
is typically so immense that many separate measurements are almost inevitably
represented as an average, sacrificing detail for tractability. A neuromorphic
computing paradigm may be ideally suited to address these challenges through
the ability to leverage both spatial and temporal locality.
Figure 6
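One concrete way a neuromorphic pipeline can exploit this temporal locality is to encode measurements as change events rather than dense samples, so that slowly varying data produces few spikes. The sketch below is a minimal threshold (delta) encoder; the signal, threshold, and function name are illustrative assumptions, not a method from the report.

import numpy as np

def delta_encode(signal, threshold=0.1):
    """Emit (time, +1/-1) spike events only when the signal drifts more than
    `threshold` from the last encoded level; smooth stretches emit nothing."""
    events, level = [], float(signal[0])
    for t in range(1, len(signal)):
        while signal[t] - level >= threshold:    # upward change: ON spikes
            events.append((t, +1))
            level += threshold
        while level - signal[t] >= threshold:    # downward change: OFF spikes
            events.append((t, -1))
            level -= threshold
    return events

t = np.linspace(0.0, 1.0, 200)
sig = np.sin(2 * np.pi * 3 * t)                  # a smooth, temporally local signal
events = delta_encode(sig)
print(f"{len(events)} events for {len(sig)} samples")  # fewer events than samples

The event count tracks how much the signal changes, not how long it is observed, which is exactly the property that makes spiking representations attractive for large, spatially and temporally correlated scientific data sets.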
CONCLUSION
It is clear that neuromorphic computing will play a critical role in the landscape
of future computing. The feasibility of such systems has been demonstrated, but
there are still major research questions that need to be resolved in the
development and application of these systems. DOE can fill an important
national role in enabling solutions that address these important basic research
and development challenges.
REFERENCES