Spiking neural network
Spiking neural networks (SNNs) are artificial neural network models that more closely mimic natural neural networks. In addition to neuronal and synaptic state, SNNs also incorporate the concept of time into their operating model. The idea is that neurons in the SNN do not fire at each propagation cycle (as it happens with typical multi-layer perceptron networks), but rather fire only when a membrane potential – an intrinsic quality of the neuron related to its membrane electrical charge – reaches a specific value. When a neuron fires, it generates a signal which travels to other neurons which, in turn, increase or decrease their potentials in accordance with this signal.
In the context of spiking neural networks, the current activation level (modeled as some differential equation) is normally considered to be the neuron's state, with incoming spikes pushing this value higher, and then either firing or decaying over time. Various coding methods exist for interpreting the outgoing spike train as a real-value number, either relying on the frequency of spikes, or the timing between spikes, to encode information.
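The dynamics described above can be sketched with a leaky integrate-and-fire neuron, the simplest of the spiking models discussed below. The following Python snippet is an illustrative toy, not a reference implementation; the threshold, time constant and input values are arbitrary choices. The membrane potential integrates its input, leaks back toward rest, and emits a spike when it crosses the threshold.

```python
def simulate_lif(input_current, threshold=1.0, tau=20.0, dt=1.0, v_reset=0.0):
    """Leaky integrate-and-fire: the membrane potential decays toward rest,
    integrates the input drive, and fires when it reaches the threshold."""
    v = v_reset
    spikes = []   # spike times (time-step indices)
    trace = []    # membrane potential over time
    for t, i_t in enumerate(input_current):
        # Euler step: exponential leak plus input drive
        v += dt * (-(v - v_reset) / tau + i_t)
        if v >= threshold:
            spikes.append(t)   # record the spike time
            v = v_reset        # reset the potential after firing
        trace.append(v)
    return spikes, trace

# A constant supra-threshold drive produces regular, periodic spiking.
spikes, trace = simulate_lif([0.08] * 200)
```

With richer, time-varying input currents the same neuron produces irregular spike trains, which is where the timing-based codes discussed below come into play.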
- Modern artificial neural networks are usually fully connected, receiving continuous values and outputting continuous values. Although these networks have enabled breakthroughs in many fields, they are biologically inaccurate and do not mimic the mechanism by which neurons in a living brain actually operate.
- The first scientific model of a spiking neuron was proposed by Alan Lloyd Hodgkin and Andrew Huxley in 1952. This model describes how action potentials are initiated and propagated. Spikes, however, are not generally transmitted directly between neurons: communication requires the exchange of chemical substances in the synaptic gap, called neurotransmitters. The complexity and variability of biological models have given rise to various neuron models, such as the integrate-and-fire model (Lapicque, 1907), the FitzHugh–Nagumo model (1961–1962) and the Hindmarsh–Rose model (1984).
- From the information-theoretic point of view, the problem is to propose a model that explains how information is encoded and decoded by series of pulse trains, i.e. action potentials. Thus, one of the fundamental questions of neuroscience is whether neurons communicate by a rate code or a temporal code. Temporal coding suggests that a single spiking neuron can replace hundreds of hidden units in a sigmoidal neural net.
- A spiking neural network, which simulates neurons more faithfully, also takes the influence of timing into account. The idea is that neurons in such a dynamic network are not activated in every iteration of propagation (as is the case in a typical multilayer perceptron network), but only when their membrane potential reaches a certain value. When a neuron is activated, it produces a signal that is passed on to other neurons, raising or lowering their membrane potentials.
- In a spiking neural network, the current activation level of a neuron (modeled as a differential equation of some kind) is generally considered its state; an input spike pushes this value up for a period of time, after which it gradually declines. A number of encoding schemes have emerged to interpret these output spike trains as a real number, taking into account both spike frequency and inter-spike intervals. Drawing on neuroscience research, network models based on precise spike-generation times can be constructed. By exploiting the exact timing of spikes, such a network can convey more information and offer greater computing power.
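The two decoding families just described can be illustrated on the same spike train. The sketch below is a simplified Python illustration (the window length and the linear latency normalization are assumptions, not a standard scheme): a rate decoder counts spikes per unit time, while a time-to-first-spike decoder maps an earlier spike to a larger value.

```python
def rate_decode(spike_times, window):
    """Rate code: the decoded value is the number of spikes per unit time."""
    return len(spike_times) / window

def latency_decode(spike_times, t_max):
    """Temporal (time-to-first-spike) code: an earlier first spike
    decodes to a larger value; no spike decodes to zero."""
    if not spike_times:
        return 0.0
    return 1.0 - min(spike_times) / t_max

# Spike times in milliseconds, observed over a 50 ms window.
train = [3, 10, 17, 24, 31]
rate = rate_decode(train, 50)        # 5 spikes / 50 ms = 0.1 spikes per ms
value = latency_decode(train, 50)    # first spike at 3 ms -> 1 - 3/50 = 0.94
```

Note that the rate decoder discards all timing structure, whereas the latency decoder discards all but the first spike; practical schemes often combine both kinds of information.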
- It is important to note that pulse-coupled neural networks (PCNNs) are often confused with spiking neural networks (SNNs). A PCNN can be seen as a kind of SNN, while the SNN is the broader class of spike-coded models.
- At first glance, the SNN approach looks like a step backward: we move from continuous output to binary output, and spike trains are not very interpretable. But spike trains increase our ability to process spatiotemporal data (that is, real-world sensory data). Space refers to the fact that neurons are connected only to nearby neurons, so that they can process input patches separately (similar to a CNN using filters). Time refers to the fact that spikes occur over time, so that the information lost in the binary coding can be recovered from spike timing. This allows temporal data to be processed naturally, without the additional complexity of a recurrent neural network (RNN). It turns out that spiking neurons are more powerful computational units than traditional artificial neurons.
- Since SNNs are theoretically more powerful than second-generation networks, it is natural to wonder why they are not widely used. The main problem is training. Although unsupervised biological learning methods exist, such as Hebbian learning and STDP, no effective supervised training method for SNNs is known that outperforms second-generation networks. Since spike trains are not differentiable, we cannot train SNNs with gradient-descent-based methods such as backpropagation. Therefore, to apply SNNs to real-world tasks, efficient supervised learning methods must be developed. This is a difficult task because, given the biological realism of these networks, it amounts to determining how the human brain learns.
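The STDP rule mentioned above can be illustrated with a pair-based weight update: a presynaptic spike shortly before a postsynaptic one strengthens the synapse, and the reverse order weakens it, with an effect that decays exponentially in the spike-time difference. The amplitudes and time constant in this Python sketch are illustrative placeholders, not canonical values.

```python
import math

def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP weight change for one pre/post spike pair.
    Pre-before-post potentiates; post-before-pre depresses."""
    dt = t_post - t_pre
    if dt > 0:     # pre spike precedes post spike -> strengthen
        return a_plus * math.exp(-dt / tau)
    elif dt < 0:   # post spike precedes pre spike -> weaken
        return -a_minus * math.exp(dt / tau)
    return 0.0     # coincident spikes: no change in this simple variant

dw_pot = stdp_dw(10, 15)   # positive: potentiation
dw_dep = stdp_dw(15, 10)   # negative: depression
```

Because this update depends only on locally available spike times, it is unsupervised; the open problem described above is finding supervised rules of comparable efficiency.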
- This kind of neural network can in principle be used for information processing applications the same way as traditional artificial neural networks. In addition, spiking neural networks can model the central nervous system of a virtual insect for seeking food without the prior knowledge of the environment. However, due to their more realistic properties, they can also be used to study the operation of biological neural circuits. Starting with a hypothesis about the topology of a biological neuronal circuit and its function, the electrophysiological recordings of this circuit can be compared to the output of the corresponding spiking artificial neural network simulated on computer, determining the plausibility of the starting hypothesis.
- In practice, there is a major difference between the theoretical power of spiking neural networks and what has been demonstrated. They have proved useful in neuroscience, but not (yet) in engineering. Some large-scale neural network models have been designed that take advantage of the pulse coding found in spiking neural networks; these networks mostly rely on the principles of reservoir computing. However, the real-world application of large-scale spiking neural networks has been limited because the increased computational costs of simulating realistic neural models have not been justified by commensurate benefits in computational power. As a result, there has been little application of large-scale spiking neural networks to computational tasks of the order and complexity commonly addressed using rate-coded (second-generation) neural networks. In addition, it can be difficult to adapt second-generation neural network models into real-time spiking neural networks (especially if the network algorithms are defined in discrete time). It is relatively easy to construct a spiking neural network model and observe its dynamics; it is much harder to develop a model with stable behavior that computes a specific function.
The emerging picture (2019) is that SNNs still lag behind ANNs in terms of accuracy, but the gap is decreasing, and can even vanish on some tasks, while SNNs typically require many fewer operations and are the better candidates to process spatiotemporal data.
There is a diverse range of application software to simulate spiking neural networks. This software can be classified according to the use of the simulation:
- Software used primarily to simulate spiking neural networks: GENESIS (the GEneral NEural SImulation System), developed in James Bower's laboratory at Caltech; NEURON, mainly developed by Michael Hines, John W. Moore and Ted Carnevale at Yale University and Duke University; Brian, developed by Romain Brette and Dan Goodman at the École Normale Supérieure; and NEST, developed by the NEST Initiative. This type of application software usually supports the simulation of complex neural models with a high level of detail and accuracy. However, large networks usually require very time-consuming simulations.
- Software which addresses information processing tasks to solve problems. Commercial processing software such as BrainChip Studio falls into this group. It is based on application software developed by Delorme and Thorpe in a collaboration between the Centre de Recherche Cerveau et Cognition and BrainChip (formerly SpikeNet Technology). This supervised learning software can be trained almost instantaneously, offers high accuracy and very low power consumption, and has considerable advantages over convolutional neural networks where massive datasets are not available. It is currently in commercial use in civil and commercial surveillance applications in Europe and North America.
- Software which supports the efficient simulation of relatively complex neural models, so that it is also convenient for information processing tasks. This software can exploit the characteristics of biological neurons to perform computation while at the same time allowing the study of those characteristics' functionality. In this group we find EDLUT, developed at the University of Granada. Such software must be efficient enough to run fast simulations, sometimes even in real time, and at the same time it must support neural models that are detailed and biologically plausible.
- In the brain, learning is achieved through the ability of synapses to reconfigure the strength with which they connect neurons (synaptic plasticity). In promising solid-state synapses called memristors, conductance can be finely tuned by voltage pulses and set to evolve according to a biological learning rule called spike-timing-dependent plasticity (STDP). Neuromorphic architectures will comprise billions of such nanosynapses, which requires a clear understanding of the physical mechanisms responsible for plasticity. Boyn and colleagues reported synapses based on ferroelectric tunnel junctions and showed that STDP can be harnessed from inhomogeneous polarization switching. Combining scanning probe imaging, electrical transport and atomic-scale molecular dynamics, they demonstrated that conductance variations can be modelled by the nucleation-dominated reversal of domains. Based on this physical model, their simulations showed that arrays of ferroelectric nanosynapses can autonomously learn to recognize patterns in a predictable way, opening a path towards unsupervised learning in spiking neural networks.
- Classification capabilities of spiking networks trained with unsupervised learning methods have been tested on common benchmark datasets such as Iris, Wisconsin Breast Cancer and Statlog Landsat (Newman et al. 1998, Bohte et al. 2002a, Belatreche et al. 2003). Various approaches to information encoding and network design have been used. For example, Bohte and coauthors (2002b) considered a 2-layer feedforward network for data clustering and classification. Based on an idea proposed by Hopfield (1995), the authors implemented models of local receptive fields combining the properties of radial basis functions (RBF) and spiking neurons to convert input signals (the data to be classified) from a floating-point representation into a spiking representation.
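A receptive-field encoding of this kind can be sketched roughly as follows. This Python snippet is a simplified illustration, not the exact scheme of the paper; the number of neurons, the field widths and the linear activation-to-latency mapping are all assumptions. Each neuron has a Gaussian tuning curve over the input range, and a stronger activation translates into an earlier first spike.

```python
import math

def population_encode(x, n_neurons=8, x_min=0.0, x_max=1.0, t_max=10.0):
    """Encode a real value x as first-spike times of a population of
    neurons with evenly spaced Gaussian receptive fields. The neuron
    whose preferred value is closest to x fires earliest."""
    sigma = (x_max - x_min) / n_neurons
    centers = [x_min + (i + 0.5) * (x_max - x_min) / n_neurons
               for i in range(n_neurons)]
    spike_times = []
    for c in centers:
        # Gaussian activation in (0, 1]; peaks when x equals the centre
        activation = math.exp(-((x - c) ** 2) / (2 * sigma ** 2))
        # Strong activation maps to an early spike, weak to a late one
        spike_times.append((1.0 - activation) * t_max)
    return spike_times

times = population_encode(0.3)
# The neuron whose centre is nearest 0.3 spikes earliest.
```

This converts a single scalar into a vector of spike latencies, which is the spiking representation the downstream network then classifies.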
- Neurogrid, built at Stanford University, is a board that can simulate spiking neural networks directly in hardware. SpiNNaker (Spiking Neural Network Architecture), designed at the University of Manchester, uses ARM processors as the building blocks of a massively parallel computing platform based on a six-layer thalamocortical model.
- Another implementation is the TrueNorth processor from IBM. This processor contains 5.4 billion transistors, but is designed to consume very little power, only 70 milliwatts; most processors in personal computers contain about 1.4 billion transistors and require 35 watts or more. IBM refers to the design principle behind TrueNorth as neuromorphic computing. Its primary purpose is pattern recognition; while critics say the chip isn't powerful enough, its supporters point out that this is only the first generation, and the capabilities of improved iterations will become clear.
- The first commercial implementation of a hardware-accelerated spiking neural network system was introduced by BrainChip in September 2017. BrainChip Accelerator is an 8-lane PCI-Express add-in card that increases the speed and accuracy of the object recognition function of BrainChip Studio software (see above) by up to six times. The processing is done by six BrainChip Accelerator cores in a field-programmable gate array (FPGA). Each core performs fast, user-defined image scaling, spike generation, and spiking neural network comparison to recognize objects. In combination with a CPU, BrainChip Accelerator can process 16 channels of video simultaneously, with an effective throughput of over 600 frames per second. The low-power characteristics of BrainChip's spiking neural technology result in a total consumption of only 15 watts. It is particularly suited to helping law enforcement and intelligence organizations rapidly search vast amounts of video footage and identify patterns or faces. The SNN technology enables the hardware accelerator to work on low-resolution video, requiring only a 24x24-pixel image to detect and classify faces.
- Another hardware platform aimed at providing reconfigurable, general-purpose, real-time networks of spiking neurons is the Dynamic Neuromorphic Asynchronous Processor (DYNAP). DYNAP uses a unique combination of slow, low-power, inhomogeneous subthreshold analog circuits and fast programmable digital circuits. This allows the implementation of real-time spike-based neural processing architectures in which memory and computation are co-localized, solving the von Neumann bottleneck and enabling real-time, massively multiplexed communication of spiking events for realising large networks. Recurrent networks, feed-forward networks, convolutional networks, attractor networks, echo-state networks, deep networks, and sensory fusion networks are a few of the possibilities.
- Intel also offers a hardware platform supporting SNNs. Loihi is a 60-mm² chip fabricated in Intel's 14-nm process that advances the state of the art in modeling spiking neural networks in silicon. It integrates a wide range of features novel to the field, such as hierarchical connectivity, dendritic compartments, synaptic delays and, most importantly, programmable synaptic learning rules. Running a spiking convolutional form of the Locally Competitive Algorithm, Loihi can solve LASSO optimization problems with an energy-delay product more than three orders of magnitude better than that of conventional solvers running on a CPU at iso-process/voltage/area. This provides an unambiguous example of spike-based computation outperforming all known conventional solutions.
- Cognitive architecture
- Cognitive map
- Cognitive computer
- Computational neuroscience
- Neural coding
- Neural correlate
- Neural decoding
- Models of neural computation
- Motion perception
- Systems neuroscience
- Maass, Wolfgang (1997). "Networks of spiking neurons: The third generation of neural network models". Neural Networks. 10 (9): 1659–1671. doi:10.1016/S0893-6080(97)00011-7. ISSN 0893-6080.
- "A Brief Overview of Spiking Neural Networks (SNN): The Next Generation of Neural Networks" (简述脉冲神经网络SNN：下一代神经网络). 机器之心 (Synced), CSDN Blog.
- Wulfram Gerstner (2001). "Spiking Neurons". In Wolfgang Maass; Christopher M. Bishop. Pulsed Neural Networks. MIT Press. ISBN 978-0-262-63221-8.
- Alnajjar, F.; Murase, K. (2008). "A simple Aplysia-like spiking neural network to generate adaptive behavior in autonomous robots". Adaptive Behavior. 14 (5): 306–324. doi:10.1177/1059712308093869.
- X Zhang; Z Xu; C Henriquez; S Ferrari (Dec 2013). Spike-based indirect training of a spiking neural network-controlled virtual insect. Decision and Control (CDC), IEEE. pp. 6798–6805. CiteSeerX 10.1.1.671.6351. doi:10.1109/CDC.2013.6760966. ISBN 978-1-4673-5717-3.
- Tavanaei, Amirhossein; Ghodrati, Masoud; Kheradpisheh, Saeed Reza; Masquelier, Timothée; Maida, Anthony (March 2019). "Deep learning in spiking neural networks". Neural Networks. 111: 47–63. doi:10.1016/j.neunet.2018.12.002. PMID 30682710.
- Abbott, L. F.; Nelson, Sacha B. (November 2000). "Synaptic plasticity: taming the beast". Nature Neuroscience. 3 (S11): 1178–1183. doi:10.1038/81453. PMID 11127835.
- Atiya, A.F.; Parlos, A.G. (May 2000). "New results on recurrent network training: unifying the algorithms and accelerating convergence". IEEE Transactions on Neural Networks. 11 (3): 697–709. doi:10.1109/72.846741. PMID 18249797.
- Sutton, R. S.; Barto, A. G. (2002). Reinforcement Learning: An Introduction. Cambridge, MA: Bradford Books, MIT Press.
- Boyn, S.; Grollier, J.; Lecerf, G. (2017-04-03). "Learning through ferroelectric domain dynamics in solid-state synapses". Nature Communications. 8: 14736. Bibcode:2017NatCo...814736B. doi:10.1038/ncomms14736. PMC 5382254. PMID 28368007.
- Ponulak, F.; Kasinski, A. (2010). "Supervised learning in spiking neural networks with ReSuMe: sequence learning, classification and spike-shifting". Neural Comput. 22 (2): 467–510. doi:10.1162/neco.2009.11-08-901. PMID 19842989.
- Pfister, Jean-Pascal; Toyoizumi, Taro; Barber, David; Gerstner, Wulfram (June 2006). "Optimal Spike-Timing-Dependent Plasticity for Precise Action Potential Firing in Supervised Learning". Neural Computation. 18 (6): 1318–1348. arXiv:q-bio/0502037. Bibcode:2005q.bio.....2037P. doi:10.1162/neco.2006.18.6.1318. PMID 16764506.
- Xin Jin; Furber, Steve B.; Woods, John V. (2008). "Efficient modelling of spiking neural networks on a scalable chip multiprocessor". 2008 IEEE International Joint Conference on Neural Networks (IEEE World Congress on Computational Intelligence). pp. 2812–2819. doi:10.1109/IJCNN.2008.4634194. ISBN 978-1-4244-1820-6.
- Markoff, John, A new chip functions like a brain, IBM says, New York Times, August 8, 2014, p.B1
- Sayenko, Dimitry G.; Vette, Albert H.; Kamibayashi, Kiyotaka; Nakajima, Tsuyoshi; Akai, Masami; Nakazawa, Kimitaka (March 2007). "Facilitation of the soleus stretch reflex induced by electrical excitation of plantar cutaneous afferents located around the heel". Neuroscience Letters. 415 (3): 294–298. doi:10.1016/j.neulet.2007.01.037. PMID 17276004.
- Schrauwen, B.; Van Campenhout, J. (2004). "Improving SpikeProp: enhancements to an error-backpropagation rule for spiking neural networks". In: Proceedings of the 15th ProRISC Workshop, Veldhoven, the Netherlands.
- Indiveri, Giacomo; Corradi, Federico; Qiao, Ning (2015). "Neuromorphic architectures for spiking deep neural networks". 2015 IEEE International Electron Devices Meeting (IEDM). pp. 4.2.1–4.2.4. doi:10.1109/IEDM.2015.7409623. ISBN 978-1-4673-9894-7.
- Yamazaki, Tadashi; Tanaka, Shigeru (17 October 2007). "A spiking network model for passage-of-time representation in the cerebellum". European Journal of Neuroscience. 26 (8): 2279–2292. doi:10.1111/j.1460-9568.2007.05837.x. PMC 2228369. PMID 17953620.
- Davies, Mike; Srinivasa, Narayan; Lin, Tsung-Han; Chinya, Gautham; Cao, Yongqiang; Choday, Sri Harsha; Dimou, Georgios; Joshi, Prasad; Imam, Nabil; Jain, Shweta; Liao, Yuyun; Lin, Chit-Kwan; Lines, Andrew; Liu, Ruokun; Mathaikutty, Deepak; McCoy, Steven; Paul, Arnab; Tse, Jonathan; Venkataramanan, Guruguhanathan; Weng, Yi-Hsin; Wild, Andreas; Yang, Yoonseok; Wang, Hong (January 2018). "Loihi: A Neuromorphic Manycore Processor with On-Chip Learning". IEEE Micro. 38 (1): 82–99. doi:10.1109/MM.2018.112130359.
- Full text of the book Spiking Neuron Models. Single Neurons, Populations, Plasticity by Wulfram Gerstner and Werner M. Kistler (ISBN 0-521-89079-9)