
SEMINAR REPORT

ON

NEUROMORPHIC
COMPUTING

SESSION : JAN-JUNE 2025

Submitted by: Prakriti Tiwari (0101EC221096)

Submitted to: Dr. Aparna Singh Kushwaha, Professor

DEPARTMENT OF ELECTRONICS & COMMUNICATION ENGINEERING
UNIVERSITY INSTITUTE OF TECHNOLOGY,
RAJIV GANDHI PROUDYOGIKI VISHWAVIDYALAYA
CONTENTS

1. Introduction to Neuromorphic Computing
2. Current State of Neuromorphic Computing Research
3. Intermediate Steps
4. Future Scope
5. Conclusion
6. References
INTRODUCTION TO NEUROMORPHIC
COMPUTING

In October 2015, the White House Office of Science and Technology Policy released A Nanotechnology-Inspired Grand Challenge for Future Computing, which states the following: "Create a new type of computer that can proactively interpret and learn from data, solve unfamiliar problems using what it has learned, and operate with the energy efficiency of the human brain." As a result, various federal agencies (DOE, NSF, DOD, NIST, and IC) collaborated to deliver the white paper "A Federal Vision for Future Computing: A Nanotechnology-Inspired Grand Challenge," presenting a collective vision of the emerging and innovative solutions needed to realize the Nanotechnology-Inspired Grand Challenge for Future Computing. The white paper describes the technical priorities shared by multiple federal agencies, highlights the challenges and opportunities associated with these priorities, and presents a guiding vision for the R&D needed to achieve key near-, mid-, and long-term technical goals. This challenge aligns closely with the goals and vision of the neuromorphic computing community: to build intelligent, energy-efficient systems whose inspiration and technological baseline come from recent progress in understanding new and exciting materials physics, machine intelligence, biology, and, as an important example, the human brain. Investment in current neuromorphic computing projects has come from a variety of sources, including industry, foreign governments (e.g., the European Union's Human Brain Project), and other government agencies (e.g., DARPA's SyNAPSE, Physical Intelligence, UPSIDE, and other related programs).

However, DOE, within its mission, should make neuromorphic computing a priority for the following important reasons:

• The likelihood of fundamental scientific breakthroughs is real and driven by the quest for neuromorphic computing and its ultimate realization. Fields that may be impacted include neuroscience, machine intelligence, and materials science.

• The commercial sector may not invest in the required high-risk/high-payoff research on emerging technologies because of the long lead times for practical and effective product development and marketing.

• Government applications are, for the most part, different from commercial applications; therefore, government needs will not be met if they rely on technology derived from commercial products. Moreover, DOE's applications in particular are also fundamentally different from other government agency applications.

• The long-term economic return of government investment in neuromorphic computing will likely dwarf other investments that the government might make.

• The government's long history of successful investment in computing technology (probably the most valuable investment in history) is a proven case study that is relevant to the opportunity in neuromorphic computing.

• The massive, ongoing accumulation of data everywhere is an untapped source of wealth and wellbeing for the nation.

What is neuromorphic computing?

Neuromorphic computing combines computing fields such as machine learning and artificial intelligence with cutting-edge hardware development and materials science, as well as ideas from neuroscience. In its original incarnation, "neuromorphic" was used to refer to custom devices/chips that included analog components and mimicked biological neural activity [Mead1990]. Today, neuromorphic computing has broadened to include a wide variety of software and hardware components, as well as materials science, neuroscience, and computational neuroscience research. To accommodate the expansion of the field, we propose the following definition to describe the current state of neuromorphic computing:

Neural-inspired systems for non–von Neumann computational architectures

In most instances, however, neuromorphic computing systems refer to devices with the following properties:
• two basic components: neurons and synapses,
• co-located memory and computation,
• simple communication between components, and
• learning in the components.
Additional characteristics that some (though not all) neuromorphic systems include are
• nonlinear dynamics,
• high fan-in/fan-out components,
• spiking behaviour,
• the ability to adapt and learn through plasticity of parameters, events, and structure,
• robustness, and
• the ability to handle noisy or incomplete input.
Neuromorphic systems have also tended to emphasize temporal interactions; the operation of these systems tends to be event-driven. Several properties of
neuromorphic systems (including event-driven behaviour) allow for low-power
implementations, even in digital systems. The wide variety of characteristics of
neuromorphic systems indicates that there are a large number of design choices that
must be addressed by the community with input from neurophysiologists,
computational neuroscientists, biologists, computer scientists, device engineers,
circuit designers, and material scientists.
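
To make these properties concrete, the sketch below (a Python illustration added here, not part of the original report) models a single leaky integrate-and-fire neuron with weighted synapses: the weights act as co-located "memory," and output is event-driven, produced only when the membrane potential crosses a threshold. All parameter values are arbitrary demonstration choices.

import numpy as np

# Illustrative sketch: a leaky integrate-and-fire (LIF) neuron with weighted synapses.
# Parameter values are arbitrary and chosen only for demonstration.
class LIFNeuron:
    def __init__(self, n_inputs, threshold=1.0, leak=0.9):
        self.weights = np.random.uniform(0.0, 0.5, n_inputs)  # synaptic weights (co-located "memory")
        self.threshold = threshold  # firing threshold
        self.leak = leak            # membrane leak factor per time step
        self.potential = 0.0        # membrane potential

    def step(self, input_spikes):
        # Integrate weighted input spikes, apply leak, and fire only when the threshold is crossed.
        self.potential = self.leak * self.potential + float(np.dot(self.weights, input_spikes))
        if self.potential >= self.threshold:
            self.potential = 0.0    # reset after firing
            return 1                # event-driven output: a spike
        return 0

rng = np.random.default_rng(0)
neuron = LIFNeuron(n_inputs=4)
for t in range(20):
    spikes_in = (rng.random(4) < 0.3).astype(float)  # random presynaptic spike pattern
    print(t, neuron.step(spikes_in))

Even in this toy model, output is produced only on the time steps where accumulated input crosses the threshold, which is the event-driven behaviour that makes low-power implementations possible.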
Why now?

In 1978, Backus described the von Neumann bottleneck [Backus1978]: "Surely there must be a less primitive way of making big changes in the store than by pushing vast numbers of words back and forth through the von Neumann bottleneck. Not only is this tube a literal bottleneck for the data traffic of a problem, but, more importantly, it is an intellectual bottleneck that has kept us tied to word-at-a-time thinking instead of encouraging us to think in terms of the larger conceptual units of the task at hand. Thus programming is basically planning and detailing the enormous traffic of words through the von Neumann bottleneck, and much of that traffic concerns not significant data itself but where to find it."

In the von Neumann architecture, memory and computation are separated by a bus, and both the data for the program at hand and the program itself have to be transferred from memory to a central processing unit (CPU). As CPUs have grown faster, memory access and transfer speeds have not improved at the same rate [Hennessy2011]. Moreover, even CPU performance increases are slowing, as Moore's law, which states that the number of transistors on a chip doubles roughly every two years, is beginning to slow (if not plateau). Though there is some argument as to whether Moore's law has actually come to an end, there is a consensus that Dennard scaling, which says that power density stays constant as transistors get smaller, ended around 2004 [Shalf2015]. As a consequence, energy consumption on chips has increased as we continue to add transistors. While we are simultaneously experiencing issues associated with the von Neumann bottleneck, the computation-memory gap, the plateau of Moore's law, and the end of Dennard scaling, we are gathering data in greater quantities than ever
before. Data comes in a variety of forms and is gathered in vast quantities through a
plethora of mechanisms, including sensors in real environments, by companies,
organizations, and governments and from scientific instruments or simulations. Much
of this data sits idle in storage, is summarized for a researcher using statistical
techniques, or is thrown away completely because current computing resources and
associated algorithms cannot handle the scale of data that is being gathered.
Moreover, beyond intelligent data analysis needs, as computing has developed, the
types of problems we as users want computers to solve have expanded. In particular,
we are expecting more and more intelligent behaviour from our systems.
These issues and others have spurred the development of non–von Neumann architectures.
In particular, the goal of pursuing new architectures is not to find a replacement for the
traditional von Neumann paradigm but to find architectures and devices that can complement
the existing paradigm and help to address some of its weaknesses.

Neuromorphic architectures are one of the proposed complementary architectures for several
reasons:

1. Co-located memory and computation, as well as simple communication between components, can provide a reduction in communication costs.

2. Neuromorphic architectures often result in lower power consumption (which can be a result of analog or mixed analog-digital devices or of the event-driven nature of the systems).

3. Common data analysis techniques, such as neural networks, have natural implementations on neuromorphic devices and thus are applicable to many "big data" problems.

4. By building the architecture using brain-inspired components, there is potential that a type of intelligent behaviour will emerge.

Overall, neuromorphic computing offers the potential for enormous increases in computational efficiency compared to existing architectures in domains such as big data analysis, sensory fusion and processing, real-world/real-time control (e.g., robots), and cyber security. Without neuromorphic computing as part of the future landscape of computing, these applications will be very poorly served.
Current State of Neuromorphic
Computing Research

Because of the broad definition of neuromorphic computing and since the community spans a

large number of fields (neuroscience, computer science, engineering, and materials science), it

can be difficult to capture a full picture of the current state of neuromorphic computing research.

The goals and motivations for pursuing neuromorphic computing research vary widely from

project to project, resulting in a very diverse set of work.

One view of neuromorphic systems is that they represent one pole of a spectrum of programmable

computing platforms (Figure 1). On one end of that spectrum is the synchronous von Neumann

architecture. The number of cores or computational units increases in moving across this

spectrum, as does the asynchrony of the system.

Figure 2
One group of neuromorphic computing research is motivated by computational neuroscience and

thus is interested in building hardware and software systems capable of completing large-scale,

high-accuracy simulations of biological neural systems in order to better understand the biological

brain and how these systems function.

Though the primary motivation for these types of projects is to perform large-scale computational

neuroscience, many of the projects are also being studied as computing platforms, such as

SpiNNaker [Furber2013] and BrainScaleS [Brüderle2011]. A second group of neuromorphic

computing research is motivated by accelerating existing deep learning networks and training and

thus is interested in building hardware that is customized specifically for certain types of neural

networks (e.g., convolutional neural networks) and certain types of training algorithms (e.g., backpropagation). Deep learning achieves state-of-the-art results for a certain set of tasks (such as image recognition and classification), but depending on the task, training on traditional CPUs can take weeks or even months. Most state-of-the-art results in deep learning have been obtained

by utilizing graphics processing units (GPUs) to perform the training process. Much of the deep

learning research in recent years has been motivated by commercial interests, and as such the

custom deep learning–based neuromorphic systems have been primarily created by industry

(e.g., Google's Tensor Processing Unit [Jouppi2016] and the Nervana Engine [Nervana2016]).

These systems fit the broad definition of neuromorphic computing in that they are neural-inspired

systems on non–von Neumann hardware. However, there are several characteristics of deep

learning–based systems that are undesirable in other neuromorphic systems, such as the reliance

on a very large number of labelled training examples.

The third and perhaps most common set of neuromorphic systems is motivated by developing efficient neurally inspired computational hardware systems, usually based on spiking and non-spiking neural networks. These systems may include digital or analog implementations of neurons, synapses, and perhaps other biologically inspired components. Example systems in this category include the TrueNorth system [Merolla2014], HRL's Latigo chip [Supusepa], Neurogrid [Benjamin2014], and Darwin [Shen2016]. It is also worth noting that there are neuromorphic implementations using off-the-shelf commodities, such as field programmable gate arrays (FPGAs), which are useful both as prototype systems and, because of their relatively low cost, as devices with real-world value. One of the most popular technologies associated with building neuromorphic systems is the

memristor (also known as ReRAM). There are two general types of memristors: non-volatile,

which is typically used to implement synapses, and locally active, which could be used to

represent a neuron or axon (Figure 2). Non-volatile memristors are also used to demonstrate

activation functions and other logical computations. Memristors used to implement synapses are

often used in a crossbar (Figure 3). The crossbar can operate in either a current mode or a voltage

mode, depending on the energy optimization constraints. The use of spiking neural networks as

a model in some neuromorphic systems allows these types of systems to have asynchronous,

event-driven computation, reducing energy overhead. Another use of these devices is to realize

a co-located dense memory for storing intermediate results within a network (such as weights).

Also, the use of technologies such as memristors has the potential to enable large-scale systems

with relatively small footprints and low energy usage.

Figure 3
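
As an illustration of how a crossbar performs computation, the sketch below (a toy Python example added here, not taken from the report) treats each cross-point conductance as a synaptic weight: applying input voltages to the rows yields column currents equal to a matrix-vector product, by Ohm's and Kirchhoff's laws. The conductance and voltage ranges are arbitrary assumptions.

import numpy as np

# Illustrative sketch: a memristor crossbar as an analog matrix-vector multiplier.
# Each cross-point stores a conductance G[i, j] acting as a synaptic weight.
# Applying row voltages V gives column currents I = G^T V (Ohm's law per device,
# Kirchhoff's current law per column). Values below are arbitrary.
n_rows, n_cols = 4, 3                               # 4 presynaptic inputs, 3 postsynaptic outputs
rng = np.random.default_rng(1)
G = rng.uniform(1e-6, 1e-4, size=(n_rows, n_cols))  # conductances (siemens)
V = np.array([0.2, 0.0, 0.2, 0.1])                  # input voltages (volts), e.g. encoded spikes

I = G.T @ V                                         # weighted sums computed in one analog step
print("column currents (A):", I)

The appeal is that the weighted sums a neural network layer needs are computed in place, in the same devices that store the weights, rather than being shuttled across a bus.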
Open Issues

Neuromorphic computing includes researchers in fields such as neuroscience, computing,

computer and electrical engineering, device physics, and materials science. Each research area

has a set of open research questions that must be addressed in order for the field to move

forward. The focus of the workshop was to identify the major questions from a computing

perspective of neuromorphic computing or questions that can be addressed primarily by

computational scientists, computer scientists, and mathematicians and whose solutions can

benefit from the use of high performance computing (HPC) resources. Based on the submissions

to the workshop and the presentations and discussions during the workshop, six major questions

emerged, each of which is described in the following sections and framed in the bigger picture of

a neuromorphic architecture (as shown in Figure 5). It is important to note that none of these

questions can be answered by computing alone and will require close collaboration with other

disciplines across the field of neuromorphic computing.

What are the basic computational building blocks and general architectures of neuromorphic systems?
Figure 4

The basic computational building blocks of most neuromorphic systems are neurons and

synapses. Some neuromorphic systems go further and include notions of axons, dendrites, and

other neural structures, but in general, neurons and synapses are the key components of the

majority of neuromorphic systems. Information propagation is usually conducted through spikes: whenever enough charge has accumulated at its synapses, a neuron generates outgoing spikes, which cause charge to be injected into post-synaptic neurons. What neuron and synapse

models are appropriate? When defining a neuromorphic architecture or model, the associated

neuron and synapse models must be chosen from among a large variety of neuron and synapse

models. Example neuron models range from very simple (e.g., McCulloch-Pitts [McCulloch1943])

to very complex (e.g., Hodgkin-Huxley [Hodgkin1952]).
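
As an intermediate point on this spectrum of model complexity, the sketch below simulates the Izhikevich neuron [Izhikevich2003], which reproduces many biological firing patterns with only two coupled equations. This is an illustrative example added here, not part of the report; the parameters are the commonly quoted "regular spiking" values, and the input current is arbitrary.

# Illustrative sketch: the Izhikevich neuron model [Izhikevich2003], an intermediate-complexity
# alternative between McCulloch-Pitts and Hodgkin-Huxley. Parameters are the commonly used
# "regular spiking" values; the constant input current is an arbitrary choice.
a, b, c, d = 0.02, 0.2, -65.0, 8.0
v, u = -65.0, b * (-65.0)       # membrane potential (mV) and recovery variable
dt, I_in = 0.5, 10.0            # time step (ms) and input current

spike_times = []
for step in range(2000):        # simulate 1000 ms with simple Euler integration
    v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I_in)
    u += dt * a * (b * v - u)
    if v >= 30.0:               # spike detected: record time, then reset
        spike_times.append(step * dt)
        v, u = c, u + d
print("first spikes (ms):", spike_times[:8])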

In addition to complicated neuron models, there are a variety of synapse implementations,

ranging from very simple synapses that are only represented by weight values (such as those used

with McCulloch-Pitts neurons) to extremely complex, biologically realistic synapses that include

notions of learning or adaptation. Artificial synapses may or may not have a notion of delay,

depending on the implementation. Some synapse models represent synapse activity using a

conductance-based model [Vogelstein2007] rather than a weight value. Synaptic weights may be

adapted over the course of network simulation via learning mechanisms such as Hebbian-

learning-like rules (e.g., spike-timing-dependent plasticity (STDP) [Caporale2008] or long-term


and short-term potentiation and depression [Rabinovich2006]). In general, we do not believe that

detailed bio-mimicry is necessary or feasible because neurobiology is extremely complicated and

still not well understood.
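
A minimal sketch of one such learning mechanism, pair-based STDP [Caporale2008], is given below (an illustration added here, with arbitrary learning rates and time constants): a synapse is strengthened when the presynaptic spike precedes the postsynaptic spike and weakened otherwise, with an exponential dependence on the timing difference.

import numpy as np

# Illustrative sketch: a pair-based spike-timing-dependent plasticity (STDP) rule.
# Constants are arbitrary demonstration values.
A_plus, A_minus = 0.01, 0.012       # potentiation / depression amplitudes
tau_plus, tau_minus = 20.0, 20.0    # exponential window time constants (ms)

def stdp_delta_w(t_pre, t_post):
    # Weight change for a single pre/post spike pair (times in ms).
    dt = t_post - t_pre
    if dt > 0:
        return A_plus * np.exp(-dt / tau_plus)   # pre before post: potentiate
    return -A_minus * np.exp(dt / tau_minus)     # post before (or with) pre: depress

w = 0.5
for t_pre, t_post in [(10.0, 15.0), (30.0, 28.0), (50.0, 52.0)]:
    w = float(np.clip(w + stdp_delta_w(t_pre, t_post), 0.0, 1.0))  # keep the weight bounded in [0, 1]
    print(f"pair ({t_pre}, {t_post}) -> w = {w:.4f}")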

What other biological components may be necessary for a working

neuromorphic system?

Beyond neurons and synapses, there are a variety of other biologically inspired mechanisms that may be worth considering as computational primitives. Astrocytes are glial cells in biological brains that act as regulatory systems; in particular, they can stimulate, calm, synchronize, and repair neurons. Certainly we will want to consider what computational effects these regulatory-type systems will have on neuromorphic models. Neurotransmitter and neuromodulator systems also have a significant effect on the behaviour of biological brains. It is not clear how these systems influence the capabilities of biological brains, such as learning, adaptation, and fault tolerance, but it is worthwhile to consider their inclusion in neuromorphic models. In general, the goal should be to make minimalistic systems first and then to grow their complexity as we understand how the systems operate together. If we start with the proposition that it must be very, very complex to work, then we will surely fail.

There are many different types of neurons and synapses within biological neural systems, along with other biological components such as glial cells; different areas of the brain have different neuron types, synapse types, connectivity patterns, and supporting systems. Artificial neural networks have taken inspiration from different areas of the brain for different types of neural networks. For example, the structure of convolutional neural networks is inspired by the organization and layout of the visual cortex [LeCun2015], whereas hierarchical temporal memory (HTM) takes its inspiration from the organization of the neocortex [Hawkins2016, WSP:Kudithipudi] (Figure 7). When considering model selection, it may be worthwhile to target a specific functionality or set of applications and take inspiration from a particular part of the brain that performs that functionality well in determining the characteristics of the model.

How do we build and/or integrate the necessary computing hardware?

The primary focus of this report has been on computing aspects of neuromorphic computing.

However, in order to have a truly successful neuromorphic computing program, it will be

necessary that we consider the hardware and device components as well. As compared to

conventional computing systems, neuromorphic computing systems and algorithms need higher

densities of typically lower precision memories operating at lower frequencies. Also,

multistate/analog memories offer the potential to support learning and adaptation in an

efficient and natural manner. Without efficient hardware implementations that leverage new

materials and devices, the real growth of neuromorphic applications will be substantially

hindered. As noted throughout this report, significant effort will be needed on the part of the

computing community to work with researchers in the areas of neuromorphic materials, devices,

and circuitry. Neuromorphic systems will not and cannot be developed by a single field in

isolation. The decisions we make at the model and algorithm level should both inform and be

informed by what occurs at the materials, device, and circuitry levels. DOE has capabilities at all

levels of neuromorphic computing (models, algorithms, circuitry, devices, and materials) that can

and should be leveraged together.


Intermediate Steps

There are several intermediate steps that the community can take to begin to
address these challenges.
1. Large-Scale Simulators: The first short-term goal is the development of large-
scale simulations that are capable of supporting different neuromorphic
architectures at different levels of abstraction. High-level simulations will be
important for examining and testing different computational building blocks,
as well as connectivity patterns and structural programmability
requirements. High-level simulations can be used to develop and test
theoretical results, as well as to help develop algorithms for training and
learning. Insights gained from these simulations can also inform device
design. Lower-level simulations are also likely to be important. Large-scale
simulations at the device level, circuit level, or even materials level can be
used in concert with small-scale prototypes of devices to provide insight as to the
capabilities and characteristics of the system, including energy efficiency
estimates.

2. Algorithms: A second intermediate goal is to delve deeply into the


development of algorithms specifically for neuromorphic systems. In doing so,
we need to understand the fundamental characteristics of each system, as
well as how they operate. As a first step, we may continue to adapt existing
training/learning methods so that they are appropriate for neuromorphic
computers. However, we cannot continue to rely on off-line and off-chip
training and learning; we must be willing to allow for suboptimal
performance of the resulting algorithms, at least in the initial development
stages. It is also worthwhile to begin to form some theoretical foundations
of what it means for a system to be intelligent. In doing so, we may look to
neuroscience, nonlinear dynamics, and theories of the mind.

3. Hardware Development: A third intermediate goal is to develop


neuromorphic hardware, including both computation building blocks and
system architecture. This requires an integrated effort with a good understanding not only of materials science, device processing, and circuit design but also of neuroscience and software engineering. We expect to achieve complete and functional computing systems by leveraging new materials, devices, and novel circuitry and architectures. The investigation and prototyping of system integration may be necessary. It is also worthwhile to make a fabrication foundry available to a wide user base of researchers and developers of neuromorphic processors by leveraging and sharing common infrastructure and interfaces.

4. Supporting Software: A fourth intermediate goal is the development of


supporting software for both simulation software and neuromorphic
hardware. It is of vital importance that this software be developed in
conjunction with the simulation software, rather than waiting for the devices
to be made available. Tools such as visualizations and user interfaces can be
used to debug the software and hardware itself throughout development
and ease the interaction for algorithm developers.

5. Benchmark Suite: A fifth intermediate goal is to define a set of benchmarks


for neuromorphic systems in order to test the relative strengths and weaknesses
of various neuromorphic computing approaches. It is of vital importance that
this set of benchmarks accurately reflects the range of capabilities and
characteristics of neuromorphic systems (Figure 15). Benchmarks for neural
network systems, such as classification, control, and anomaly detection, should
be included in the set of benchmarks. At least one neuroscience-specific
benchmark (such as generating particular spiking patterns) should also be
included, to encompass neuromorphic systems built specifically to accurately
simulate biological neural systems. It is also likely worthwhile to include at least
one non-neural network and non-neural-inspired benchmark, such as running a
traditional graph algorithm. One goal in defining the set of benchmarks is that
they should be diverse enough that they will not directly drive, and thus possibly
restrict, the creativity in the development of neuromorphic systems. As part of
defining a set of benchmarks, relevant data sets and simulation codes should be
made available for the community, so that fair comparisons may be made
between neuromorphic systems. As such, building the data sets and simulation
software will require effort.

6. Metrics: A sixth intermediate goal is the development and selection of


appropriate metrics for measuring performance and comparing different
neuromorphic systems. This step will need to be accomplished
simultaneously with benchmark selection. Neuromorphic systems are almost
always measured in terms of application-specific metrics (such as accuracy
for classification tasks). Though application-specific metrics are important
(and should be defined alongside the set of benchmarks), it is equally
important that other metrics be defined as well. A power efficiency metric
will also be important to define. Most reported results for neuromorphic
computing devices include some power metric, but it is also important to
include energy consumption and power efficiency of supporting software
running on traditional architectures as well as communication costs when
considering the true power cost of a neuromorphic system. Another major
metric to consider is a computation metric similar to FLOPS for traditional
architectures; this will likely be the hardest metric to define because different
models will have significantly different operating characteristics. It may also
be worthwhile to consider other metrics, such as size of device (footprint),
scalability, and programmability/flexibility. By defining a common set of
metrics and benchmarks, we may begin to quantify which neuromorphic
models and devices are best for a particular type of application.
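
As an illustration of how such metrics might be reported together, the sketch below (with hypothetical numbers and helper names, not measurements from any real system) computes energy per synaptic operation from average power, runtime, and operation count, and prints it alongside an application-specific accuracy score for two imaginary devices.

# Illustrative sketch: combining an application-specific metric (accuracy) with a
# power-efficiency metric (energy per synaptic operation). All numbers are hypothetical;
# a full comparison would also include host-software energy and communication costs.

def energy_per_synaptic_op(avg_power_w, runtime_s, synaptic_ops):
    # Average energy in joules spent per synaptic operation.
    return (avg_power_w * runtime_s) / synaptic_ops

# Hypothetical measurements for two imaginary devices on the same classification benchmark.
systems = {
    "device_A": {"accuracy": 0.91, "avg_power_w": 0.07, "runtime_s": 12.0, "synaptic_ops": 4.2e9},
    "device_B": {"accuracy": 0.89, "avg_power_w": 0.65, "runtime_s": 3.5, "synaptic_ops": 4.2e9},
}

for name, m in systems.items():
    e_op = energy_per_synaptic_op(m["avg_power_w"], m["runtime_s"], m["synaptic_ops"])
    print(f"{name}: accuracy = {m['accuracy']:.2f}, energy per synaptic op = {e_op:.2e} J")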

For each of these intermediate steps, it is important that the computing community develop strong interactions with the other key fields in neuromorphic computing: the neuroscience community, the hardware/device development community, and the materials community.

Figure 5
FUTURE SCOPE

Four major areas within neuromorphic computing emerged as the most


important to focus on for long-term success.

1. Learning: The development of general purpose learning and training


methods for neuromorphic systems is and will continue to be a major goal
for neuromorphic computing, and it will likely require long-term investment
in order to produce successful systems. It will require a coordinated effort
between computational neuroscientists, researchers with specialties in
learning and intelligence, machine learning and artificial intelligence
researchers, and algorithm designers in order to produce them.

2. Theory: The development of computational theory and theories of


intelligence within neuromorphic computing will also be key in neuromorphic
systems being accepted by the computing community at large. The development
of theoretical results associated with abstract models and learning or
intelligence will also drive algorithm development, device design, and materials
research of the future. With a handle on the theoretical capabilities of the
system, we may inspire the development and innovation of new neuromorphic
systems for the next several decades.

3. Large-scale coordinated effort: One of the major conclusions reached during


the workshop was that neuromorphic computing research in the United States
as a whole and in DOE in particular would benefit from a large-scale, coordinated
program across all of the associated disciplines. Such a neuromorphic computing
program for the United States was proposed by Stan Williams during our
workshop (Figure 16). The first four proposed elements of this large-scale
neuromorphic computing program fall neatly in line with the major research
questions proposed in this report, as well as the proposed short-term and long-
term goals we have identified for neuromorphic computing moving forward. It
is vital that we consider the broader picture of the community when addressing
these goals.

4. Applications: As data sets have continued to grow in recent years, the long-
term prognosis for data analysis across a broad range of applications appears
problematic. In particular, scientific data sets are typically very large with
numerous, dense, and highly interrelated measurements. One primary
characteristic of many of these scientific data sets is spatial and/or temporal
locality in the observation space.

In other words, it is rare that any two observations will be independent in both
time and space, and proximity of any two measurements in time and/or space
is highly relevant to an analysis of the data. Furthermore, the scale of the data
is typically so immense that many separate measurements are almost inevitably
represented as an average, sacrificing detail for tractability. A neuromorphic
computing paradigm may be ideally suited to address these challenges through
the ability to leverage both spatial and temporal locality.
Figure 6
CONCLUSION

Overall, there are several major conclusions of our workshop:

1. Importance of simulators, benchmarks, and metrics: There are many different


approaches to neuromorphic models, architectures, devices, and algorithms. We
need a way to study (via simulation) and evaluate (via metrics and benchmark
definitions) these systems in a meaningful way.

2. Paradigm shift for programming: A fundamental shift in the way we think


about programming and computing algorithm development is going to be
required. We need to develop theories of learning and intelligence and
understand how they will be applied to neuromorphic computing systems.

3. Focus on innovation, not applications: Applications are important, but


particular applications should not be the driving focus in the development of a
neuromorphic computer. Though benchmarks are important for evaluation, they
should not limit or restrict development.

4. Move away from brain emulation: The goal of a neuromorphic computer


should not be to emulate the brain. We should instead take inspiration from
biology: we should not reject particular models or algorithms just because
they work differently from their corresponding biological systems, nor should we
include mechanisms just because they are present in biological systems.

5. Large-scale coordination: The community would greatly benefit from a large-


scale coordinated effort, as well as a neuromorphic “hub” through which we may
openly share successes and failures, supporting software, data sets and
application simulations, and hardware designs.
The DOE Office of Science and ASCR are well positioned to address the major
challenges in neuromorphic computing for the following three important
reasons.

1. World-class scientific user facilities: We have access to and experience using


world-class scientific user facilities, such as HPC systems, which will be invaluable
in studying large-scale simulations of neuromorphic systems, and materials
science user facilities, which will be invaluable in providing insight into the
properties of materials to build neuromorphic systems.

2. World-class researchers: We have researchers in each of the major focus


areas of the community, including biology, computing, devices, and materials,
who have experience in building cross-disciplinary collaborations.

3. World-class science problems: We have world-class science problems for


which neuromorphic computing may apply. Without DOE involvement, it is likely
that neuromorphic computing will be limited to commercial applications.

It is clear that neuromorphic computing will play a critical role in the landscape
of future computing. The feasibility of such systems has been demonstrated, but
there are still major research questions that need to be resolved in the
development and application of these systems. DOE can fill an important
national role in enabling solutions that address these important basic research
and development challenges.
REFERENCES

• [Backus1978] Backus, John. "Can programming be liberated from the von Neumann style?: a functional style and its algebra of programs." Communications of the ACM 21.8 (1978): 613–641.
• [Benjamin2014] Benjamin, Ben Varkey, et al. "Neurogrid: A mixed-analog-digital multichip system for large-scale neural simulations." Proceedings of the IEEE 102.5 (2014): 699–716.
• [Brüderle2011] Brüderle, Daniel, et al. "A comprehensive workflow for general-purpose neural modelling with highly configurable neuromorphic hardware systems." Biological Cybernetics 104.4-5 (2011): 263–296.
• [Caporale2008] Caporale, N. and Y. Dan. "Spike timing-dependent plasticity: a Hebbian learning rule." Annual Review of Neuroscience 31 (2008): 25–46.
• [Chen2012] Chen, Tianshi, Yunji Chen, Marc Duranton, Qi Guo, Atif Hashmi, Mikko Lipasti, Andrew Nere, Shi Qiu, Michele Sebag, and Olivier Temam. "BenchNN: On the broad potential application scope of hardware neural network accelerators." Proceedings of the IEEE International Symposium on Workload Characterization, November 2012.
• [Chua1971] Chua, Leon. "Memristor: the missing circuit element." IEEE Transactions on Circuit Theory 18.5 (1971): 507–519.
• [Davison2009] Davison, Andrew, et al. "PyNN: a common interface for neuronal network simulators" (2009).
• [Furber2013] Furber, Steve B., David R. Lester, Luis A. Plana, Jim D. Garside, Eustace Painkras, Steve Temple, and Andrew D. Brown. "Overview of the SpiNNaker system architecture." IEEE Transactions on Computers 62.12 (2013): 2454–2467.
• [Gerstner2002] Gerstner, Wulfram, and Werner M. Kistler. Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge University Press, 2002.
• [Hasan2016] Hasan, R., T. M. Taha, C. Yakopcic, and D. Mountain. "High throughput neural network based embedded streaming multicore processors." Proceedings of the IEEE International Conference on Rebooting Computing, October 2016.
• [Hawkins2016] Hawkins, J. Biological and Machine Intelligence. Release 0.4, 2016. Accessed at http://numenta.com/biological-and-machine-intelligence/.
• [Hennessy2011] Hennessy, John L., and David A. Patterson. Computer Architecture: A Quantitative Approach. Elsevier, 2011.
• [Hodgkin1952] Hodgkin, A.L. and Huxley, A.F. "A quantitative description of membrane current and its application to conduction and excitation in nerve." The Journal of Physiology 117.4 (1952): 500.
• [Izhikevich2003] Izhikevich, E.M. "Simple model of spiking neurons." IEEE Transactions on Neural Networks 14.6 (2003): 1569–1572.
• [Jouppi2016] Jouppi, Norm. "Google supercharges machine learning tasks with TPU custom chip." May 2016. URL: https://cloudplatform.googleblog.com/2016/05/Google-supercharges-machine-learning-tasks-with-custom-chip.html.
• [LeCun2015] LeCun, Yann, Yoshua Bengio, and Geoffrey Hinton. "Deep learning." Nature 521.7553 (2015): 436–444.
