Development and Applications of Biomimetic Neuronal Networks Toward Brainmorphic Artificial Intelligence
Abstract—This brief presents the brainmorphic artificial intelligence (BMAI) project. The main goal is to generate a novel type of neuromorphic computing system, including novel algorithms and devices, that achieves very high-performance artificial intelligence (AI) information processing with low power consumption while remaining very close to biological systems. To reach this goal, we developed biomimetic neural networks. Two systems have been designed: an analog chip and a digital system implemented in an FPGA. We present in this brief two applications of these biomimetic neural networks: one in AI, with a pattern recognition algorithm, and one in neuroscience, for understanding the hearing system of Drosophila with a view to designing a new asynchronous low-power sensor system in the future.

Index Terms—Artificial intelligence, biomimetic neural network, neuromorphic engineering, silicon neuron.

Manuscript received February 26, 2018; revised April 5, 2018; accepted April 5, 2018. Date of publication April 9, 2018; date of current version May 1, 2018. This work was supported in part by NEC Corporation, in part by the JST PRESTO program, WPI, MEXT Japan, in part by the JST CREST program, and in part by JSPS KAKENHI under Grant 15H05707 and Grant 17K19450. This brief was recommended by Associate Editor J. M. de la Rosa. (Corresponding author: Timothée Levi.)

T. Levi, T. Nanami, A. Tange, and T. Kohno are with the Institute of Industrial Science, University of Tokyo, Tokyo 153-8505, Japan (e-mail: [email protected]; [email protected]; [email protected]; [email protected]).

K. Aihara is with the Institute of Industrial Science, University of Tokyo, Tokyo 153-8505, Japan, and also with the WPI-IRCN, UTIAS, University of Tokyo, Tokyo 153-8505, Japan (e-mail: [email protected]).

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/TCSII.2018.2824827

I. INTRODUCTION

NEUROMORPHIC systems are designed by mimicking, or being inspired by, the nervous system, which realizes robust, autonomous, and power-efficient information processing through a highly parallel architecture. The purpose of the Brainmorphic Artificial Intelligence (BMAI) project is to generate a novel type of neuromorphic computing system, including novel algorithms and devices, that achieves very high-performance Artificial Intelligence (AI) information processing with low power consumption and close resemblance to biological systems. Using biomimetic neural network systems, we construct a novel fundamental technology for information processing by developing the Brainmorphic AI system, which can execute fast, autonomous information processing with low energy. Two main objectives are raised: 1) understanding and simulating biological intelligence; 2) designing new low-power neuromimetic systems for AI applications.

The biomimetic neural network is a neuromorphic system with the most detailed level of analogy to the nervous system. It is a network of silicon neurons connected via silicon synapses and plasticity rules. It can be silicon based [1] or microfluidic based [2]. To present the different hardware implementations of these systems [3], we rely on the review of neuromorphic architectures published in 2010 [4] and enrich it with more recent articles. This allows us to show the growing interest in neuromorphic systems in recent years, as well as their utility and potential. Indeed, their applications extend to high-energy physics, image processing (object/image/pattern recognition, image segmentation, video processing), robotics, acoustic or olfactory recognition, and understanding of the nervous system. Hardware implementations are divided into two major categories: analog implementations (based on dedicated chips) and digital implementations (based on FPGAs, microprocessors, microcontrollers, or neurochips). We focus here on biomimetic rather than merely bio-inspired systems. The very first platforms appeared around twenty years ago [5]–[7]. In the case of analog implementations, some systems implement multi-compartmental models [8], conductance models [9], or threshold models [10], [11]. These platforms start from an analog computing core, usually an ASIC, which describes the electrophysiological activity of the neuron. The architecture of the different platforms results from a compromise between the computational cost and the complexity of the model (directly correlated to biological plausibility). The integration of plasticity and synapses is usually handled by a digital mapping stage that links the different analog chips. As for the digital implementations, biomimetic neural network implementations on FPGAs are generally used for pattern recognition, image segmentation, video image processing, and video analysis [12].

The last point of the discussion concerns the size of artificial networks. Indeed, neurons are organized in networks of various sizes. Biological plausibility is a constraint on the maximum possible size of the network. To obtain large networks of neurons (populations of 1000 neurons or more), it is necessary to:
• implement a simple neural model and a simple synapse model (a minimal sketch of such a model is given after this list). Reference [13] presents an implementation on an ASIC containing one million LIF (Leaky Integrate-and-Fire) neurons and 256 million synapses;
• perform accelerated neural network calculations, as on the SpiNNaker [14] platform with its multiprocessor architecture. With over one million cores, and one thousand simulated
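To make the notion of a "simple neural model" concrete, a minimal leaky integrate-and-fire (LIF) neuron can be written in a few lines of Python, as below; the parameter values are illustrative placeholders only and do not correspond to the circuits of [13] or [14].

# Minimal leaky integrate-and-fire (LIF) neuron: Euler integration of
#   tau_m * dV/dt = -(V - V_rest) + R_m * I(t)
# with a hard threshold and reset. All constants are illustrative only.
def simulate_lif(input_current, dt=1e-4, tau_m=20e-3, r_m=1e7,
                 v_rest=-65e-3, v_thresh=-50e-3, v_reset=-70e-3):
    v, spike_times = v_rest, []
    for step, i_in in enumerate(input_current):
        v += (-(v - v_rest) + r_m * i_in) * (dt / tau_m)   # membrane update
        if v >= v_thresh:                                  # spike and reset
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

# Example: a constant 2 nA input for 200 ms produces a regular spike train.
print(simulate_lif([2e-9] * 2000))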
Fig. 2. Experimental result of the analog chip for the elliptic bursting mode. The yellow curve is the membrane voltage v and the green curve represents the signal q. The time scale is 200 ms per division and the voltage scale is 50 mV per division.

Fig. 3. Experimental results of A) an RS neuron and B) an LTS neuron. Scope pictures taken after the 12-bit DAC module using SPI communication. The time scale is 200 ms per division for A) and 10 ms per division for B); the voltage scale is 20 mV per division for both pictures.

Our silicon neuron can realize Class II, and we could realize elliptic bursting experimentally, as shown in Fig. 2. Sixteen parameter voltages have to be tuned to obtain the intended neuronal activity. Our analog silicon neuron circuit consumes less than 5 nW: for every neuron mode with a typical firing rate (less than 100 Hz), the power consumption did not exceed 5 nW. This is in the same range as [11], [35], and [36], but our circuit can reproduce more classes of neuronal activities. This ASIC could be used for neurobiohybrid experiments. As geographically distributed artificial biomimetic neural networks and living neurons already exist [23], using this ASIC in combination with other biomimetic platforms based on different neuron models can demonstrate the importance of spike variation in neuromorphic devices.

C. Digital Silicon Neuron on FPGA

To verify the algorithms before analog implementation, we designed a digital spiking silicon neuron (DSSN) model that can be implemented in an FPGA. The DSSN model is a qualitative neuronal model designed for efficient implementation in digital arithmetic circuits, and it can simulate several classes of neuronal activities: RS, FS, Low-Threshold Spiking (LTS), Intrinsically Bursting (IB), EB, and SWB [37]. The DSSN model was implemented in a Kintex-7 FPGA with few resources (386 LUTs and 121 FFs for one FS neuron, 401 LUTs and 123 FFs for one RS neuron; no DSPs are used). Fig. 3 shows the hardware implementation of the FS and RS neurons. Our digital system can simulate the different classes of neurons in real time. The computational cost of the DSSN model is larger than that of recent sophisticated Integrate-and-Fire-based models, such as the IZH and exponential I&F models, but smaller than that of ionic-conductance models. This model is intended to provide a good trade-off that satisfies the demand for large-scale neuronal network simulation with biomimetic models.
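To illustrate how such a qualitative two-variable model reduces to the additions and multiplications that a digital arithmetic circuit executes at every time step, the sketch below integrates the classic FitzHugh–Nagumo equations with one explicit Euler update per tick. This is only a stand-in with textbook parameters; the actual DSSN nonlinearities and coefficients are those published in [37].

# Stand-in two-variable qualitative model (FitzHugh-Nagumo), advanced with
# one explicit-Euler step per tick using only additions and multiplications,
# as a digital datapath would. These are NOT the DSSN equations of [37].
DT, EPS, A, B, I_STIM = 0.01, 0.08, 0.7, 0.8, 0.5

def step(v, w):
    dv = v - (v ** 3) / 3.0 - w + I_STIM   # fast, membrane-like variable
    dw = EPS * (v + A - B * w)             # slow recovery variable
    return v + DT * dv, w + DT * dw

v, w, spikes, prev_v = -1.0, 1.0, 0, -1.0
for _ in range(200000):                    # 2000 time units of activity
    v, w = step(v, w)
    if prev_v < 1.0 <= v:                  # crude upward-crossing detector
        spikes += 1
    prev_v = v
print("spike-like excursions:", spikes)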
III. APPLICATIONS

Here, we present two applications of our biomimetic silicon neural network: one in AI and one in neuroscience. Both will be tested first on digital circuits and then implemented in analog chips.

A. Pattern Recognition

Spike-Timing-Dependent Plasticity (STDP) is a biological process that adjusts the synaptic efficacy between neurons in the brain. It does so based on the relative timing of a particular neuron's output and input action potentials, or spikes. Recently, it has been shown that STDP can play a key role in detecting repeating patterns and in generating selective responses to them [38]. STDP has thus been shown to be an important learning algorithm for forward-connected artificial neural networks in pattern recognition.

In particular, in the work presented by Masquelier et al. [38], it was shown that the repetition of arbitrary spatiotemporal spike patterns hidden in spike trains can be robustly detected and learned by multiple neurons equipped with STDP listening to the incoming spike trains (Fig. 4). The neurons become selective to successive coincidences of the patterns.
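For reference, a standard pair-based additive STDP rule can be written as follows. The exponential time windows, learning rates, and weight bounds below are generic illustrative values, not necessarily those used in [38] or in our implementation.

import math

# Pair-based additive STDP: a presynaptic spike that precedes a postsynaptic
# spike potentiates the synapse; the reverse ordering depresses it.
# All constants are generic illustrative values.
TAU_PLUS, TAU_MINUS = 0.0168, 0.0337   # potentiation / depression windows (s)
A_PLUS, A_MINUS = 0.005, 0.00525       # learning rates
W_MIN, W_MAX = 0.0, 1.0                # hard weight bounds

def stdp_update(w, t_pre, t_post):
    """Update one synaptic weight from a single pre/post spike pairing."""
    dt = t_post - t_pre
    if dt >= 0:   # pre before post -> potentiation
        w += A_PLUS * math.exp(-dt / TAU_PLUS)
    else:         # post before pre -> depression
        w -= A_MINUS * math.exp(dt / TAU_MINUS)
    return min(max(w, W_MIN), W_MAX)

# A pre-spike at 10 ms followed by a post-spike at 15 ms strengthens the
# synapse; the opposite ordering weakens it.
w = stdp_update(0.5, t_pre=0.010, t_post=0.015)
w = stdp_update(w, t_pre=0.020, t_post=0.012)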
As in the work of Masquelier et al. [38], we generated spikes independently using a Poisson process whose instantaneous firing rate varies randomly between 0 and 90 Hz. The maximal rate change was chosen so that a neuron could go from 0 to 90 Hz in 50 ms. Finally, part of the spike trains, defined as the 'pattern' to be repeated, was inserted into half of the input lines: the trains were divided into sections of 50 ms, one of these sections was randomly picked, and the corresponding spikes were copied. As shown in Fig. 5, four different hidden patterns were generated (blue, pink, cyan, and yellow), repeated at random intervals within the stochastic Poissonian activity.
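The stimulus construction described above can be sketched in a few lines of Python: independent Poisson spike trains with a slowly wandering rate, into which one fixed 50 ms section is re-pasted for half of the afferents at random times. The total duration, number of repetitions, and rate-walk step are assumptions made for illustration; the actual stimuli follow the protocol of [38].

import numpy as np

rng = np.random.default_rng(0)
DT, T_TOTAL = 0.001, 10.0            # 1 ms bins, 10 s of activity (assumed)
N_AFF, R_MAX = 512, 90.0             # number of afferents, maximal rate (Hz)
PAT_LEN = int(0.050 / DT)            # 50 ms pattern length, in bins

def poisson_trains(n, n_bins):
    """Poisson spikes whose rate wanders between 0 and R_MAX Hz, with the
    rate change limited so it can go from 0 to R_MAX in about 50 ms."""
    rates = np.full(n, R_MAX / 2)
    spikes = np.zeros((n, n_bins), dtype=bool)
    max_step = R_MAX / PAT_LEN                        # max rate change per bin
    for t in range(n_bins):
        rates = np.clip(rates + rng.uniform(-max_step, max_step, n), 0, R_MAX)
        spikes[:, t] = rng.random(n) < rates * DT
    return spikes

n_bins = int(T_TOTAL / DT)
spikes = poisson_trains(N_AFF, n_bins)

# Copy one 50 ms section from half of the afferents and paste it back at
# random times: these repetitions form the hidden pattern to be detected.
pattern_afferents = rng.choice(N_AFF, N_AFF // 2, replace=False)
start = rng.integers(0, n_bins - PAT_LEN)
pattern = spikes[pattern_afferents, start:start + PAT_LEN].copy()
for t0 in rng.integers(0, n_bins - PAT_LEN, size=20):  # 20 repetitions (assumed)
    spikes[pattern_afferents[:, None],
           np.arange(t0, t0 + PAT_LEN)[None, :]] = pattern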
The criteria for recognition are a hit rate above 90% and a false-alarm rate below 1 Hz. Using 9 DSSN neurons (LTS neuron model), 4 patterns, and 512 afferent neurons, the success rate reaches 88%. Using different families of neurons (FS, RS, or LTS) resulted in different hit rates (69% for RS for
IV. CONCLUSION

The main features of this brief can be summarized in five points. 1) We created a new neuronal model that offers a good trade-off between biological plausibility and low-resource implementation. Compared to other I&F-based models, this model can replicate several classes of neuronal activities as well as the action potential shape, which is important in brain signal processing. 2) We designed silicon neurons in an analog chip in TSMC 0.25 µm technology. This ASIC provides very low power consumption (5 nW) and can replicate many neuronal activities, such as the elliptic bursting mode. 3) We also implemented silicon neurons in an FPGA and validated this implementation by mimicking cortical neurons. We then used these two systems for different applications. 4) First, in AI, with a new pattern recognition algorithm using just 9 DSSN neurons with STDP and lateral inhibitory connections to detect four different patterns. This system can be used to characterize biological cultures on MEAs or in asynchronous sensors. 5) Finally, we developed models of the hearing system of Drosophila, to further understand biological intelligence and to design a new generation of low-power, low-resource sensors. All these systems are included in the BMAI project, a collaboration between The University of Tokyo and NEC Corporation.
REFERENCES
[1] G. Indiveri et al., "Neuromorphic silicon neuron circuits," Front. Neurosci., vol. 5, p. 73, May 2011.
[2] T. Levi and T. Fujii, "Microfluidic neurons: A new way in neuromorphic engineering?" Micromachines, vol. 7, p. 146, Aug. 2016.
[3] T. Levi et al., "Neuromimetic integrated circuits," in VLSI Circuits for Biomedical Applications. London, U.K.: Artech House, 2008, ch. 12, pp. 241–264.
[4] J. Misra and I. Saha, "Artificial neural networks in hardware: A survey of two decades of progress," Neurocomputing, vol. 74, nos. 1–3, pp. 239–255, 2010.
[5] M. Mahowald and R. Douglas, "A silicon neuron," Nature, vol. 354, pp. 515–518, Dec. 1991.
[6] R. Jung, E. J. Brauer, and J. J. Abbas, "Real-time interaction between a neuromorphic electronic circuit and the spinal cord," IEEE Trans. Neural Syst. Rehabil. Eng., vol. 9, no. 3, pp. 319–326, Sep. 2001.
[7] G. Le Masson, S. R.-L. Masson, D. Debay, and T. Bal, "Feedback inhibition controls spike transfer in hybrid thalamic circuits," Nature, vol. 417, pp. 854–858, Jun. 2002.
[8] P. Hasler, S. Kozoil, E. Farquhar, and A. Basu, "Transistor channel dendrites implementing HMM classifiers," in Proc. Int. Symp. Circuits Syst., New Orleans, LA, USA, 2007, pp. 3359–3362.
[9] F. Grassia et al., "Tunable neuromimetic integrated system for emulating cortical neuron models," Front. Neurosci., vol. 5, p. 134, Dec. 2011.
[10] J. Schemmel, L. Kriener, P. Muller, and K. Meier, "An accelerated analog neuromorphic hardware system emulating NMDA- and calcium-based non-linear dendrites," in Proc. Int. Joint Conf. Neural Netw., Anchorage, AK, USA, May 2017, pp. 2217–2226.
[11] N. Qiao et al., "A reconfigurable on-line learning spiking neuromorphic processor comprising 256 neurons and 128K synapses," Front. Neurosci., vol. 9, p. 141, Apr. 2015.
[12] R. Wang et al., "An FPGA implementation of a polychronous spiking neural network with delay adaptation," Front. Neurosci., vol. 7, p. 14, Feb. 2013.
[13] P. Merolla et al., "A million spiking-neuron integrated circuit with a scalable communication network and interface," Science, vol. 345, no. 6197, pp. 668–673, 2014.
[14] S. B. Furber et al., "Overview of the SpiNNaker system architecture," IEEE Trans. Comput., vol. 62, no. 12, pp. 2454–2467, Dec. 2013.
[15] A. D. Rast et al., "A location-independent direct link neuromorphic interface," in Proc. Int. Joint Conf. Neural Netw., Dallas, TX, USA, 2013, pp. 1–8.
[16] M. Ambroise, T. Levi, S. Joucla, B. Yvert, and S. Saïghi, "Real-time biomimetic central pattern generators in an FPGA for hybrid experiments," Front. Neurosci., vol. 7, p. 215, Nov. 2013.
[17] S. Vassanelli and M. Mahmud, "Trends and challenges in neuroengineering: Toward 'intelligent' neuroprostheses through brain-'brain inspired systems' communication," Front. Neurosci., vol. 10, p. 438, Sep. 2016.
[18] S. Potter, A. El Hady, and E. Fetz, "Closed-loop neuroscience and neuroengineering," Front. Neural Circuits, vol. 8, pp. 2013–2015, Sep. 2014.
[19] T. Levi, P. Bonifazi, P. Massobrio, and M. Chiappalone, "Closed-loop systems for next-generation neuroprostheses," Front. Neurosci., vol. 12, p. 26, Feb. 2018.
[20] F. D. Broccard, S. Joshi, J. Wang, and G. Cauwenberghs, "Neuromorphic neural interfaces: From neurophysiological inspiration to biohybrid coupling with nervous systems," J. Neural Eng., vol. 14, no. 4, 2017, Art. no. 041002.
[21] P. Bonifazi et al., "In vitro large-scale experimental and theoretical studies for the realization of bi-directional brain-prostheses," Front. Neural Circuits, vol. 7, p. 40, Mar. 2013.
[22] S. Joucla et al., "Generation of locomotor-like activity in the isolated rat spinal cord using intraspinal electrical microstimulation driven by a digital neuromorphic CPG," Front. Neurosci., vol. 10, p. 67, Feb. 2016.
[23] A. Chiolerio, M. Chiappalone, P. Ariano, and S. Bocchini, "Coupling resistive switching devices with neurons: State of the art and perspectives," Front. Neurosci., vol. 11, p. 70, Feb. 2017.
[24] A. Serb et al., "A geographically distributed bio-hybrid neural network with memristive plasticity," arXiv:1709.04179, 2017.
[25] M. Pospischil et al., "Minimal Hodgkin–Huxley type models for different classes of cortical and thalamic neurons," Biol. Cybern., vol. 99, nos. 4–5, pp. 427–441, 2008.
[26] D. Debanne, A. Bialowas, and S. Rama, "What are the mechanisms for analogue and digital signalling in the brain?" Nat. Rev. Neurosci., vol. 14, pp. 63–69, Nov. 2013.
[27] R. Brette, "Philosophy of the spike: Rate-based vs. spike-based theories of the brain," Front. Syst. Neurosci., vol. 9, p. 151, Nov. 2015.
[28] E. M. Izhikevich, "Simple model of spiking neurons," IEEE Trans. Neural Netw., vol. 14, no. 6, pp. 1569–1572, Nov. 2003.
[29] W. Gerstner and W. Kistler, Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge, U.K.: Cambridge Univ. Press, 2002.
[30] H. Alle and J. R. P. Geiger, "Combined analog and action potential coding in hippocampal mossy fibers," Science, vol. 311, no. 5765, pp. 1290–1293, Mar. 2006.
[31] T. Kohno and K. Aihara, "Mathematical-model-based design method of silicon burst neurons," Neurocomputing, vol. 71, nos. 7–9, pp. 1619–1628, Mar. 2008.
[32] T. Kohno, M. Sekikawa, J. Li, T. Nanami, and K. Aihara, "Qualitative-modeling-based silicon neurons and their networks," Front. Neurosci., vol. 10, no. 273, pp. 1–16, Jun. 2016.
[33] T. Kohno, M. Sekikawa, and K. Aihara, "A configurable qualitative-modeling-based silicon neuron circuit," Nonlin. Theory Appl., vol. 8, no. 1, pp. 25–37, Jan. 2017.
[34] A. L. Hodgkin and A. F. Huxley, "A quantitative description of membrane current and its application to conduction and excitation in nerve," J. Physiol., vol. 117, no. 4, pp. 500–544, 1952.
[35] S. Brink, S. Nease, and P. Hasler, "Computing with networks of spiking neurons on a biophysically motivated floating-gate based neuromorphic integrated circuit," Neural Netw., vol. 45, pp. 39–49, Sep. 2013.
[36] V. Rangan et al., "A subthreshold aVLSI implementation of the Izhikevich simple neuron model," in Proc. IEEE Eng. Med. Biol. Conf., Buenos Aires, Argentina, Aug. 2010, pp. 4164–4167.
[37] T. Nanami and T. Kohno, "Simple cortical and thalamic neuron models for digital arithmetic circuit implementation," Front. Neurosci., vol. 10, no. 181, pp. 1–12, May 2016.
[38] T. Masquelier, R. Guyonneau, and S. J. Thorpe, "Competitive STDP-based spike pattern learning," Neural Comput., vol. 21, no. 5, pp. 1259–1276, 2009.
[39] T. Levi et al., "Biomimetic neural network for modifying biological dynamics during hybrid experiments," J. Artif. Life Robot., vol. 22, no. 3, pp. 398–403, May 2017.
[40] T. Tully and W. G. Quinn, "Classical conditioning and retention in normal and mutant Drosophila melanogaster," J. Compar. Physiol. A, vol. 157, no. 2, pp. 263–277, Sep. 1985.
[41] A. W. Ewing and H. C. Bennet-Clark, "The courtship songs of Drosophila," Behaviour, vol. 31, nos. 3–4, pp. 288–301, 1968.
[42] A. Kamikouchi et al., "The neural basis of Drosophila gravity-sensing and hearing," Nature, vol. 458, pp. 165–171, Mar. 2009.
[43] R. Fettiplace and P. A. Fuchs, "Mechanisms of hair cell tuning," Annu. Rev. Physiol., vol. 61, pp. 809–834, Mar. 1999.