Spiking neural networks (SNNs) are artificial neural networks (ANNs) that mimic natural neural networks.[1] These models leverage the timing of discrete spikes as the main information carrier.[2]
In addition to neuronal and synaptic state, SNNs incorporate the concept of time into their operating model. The idea is that neurons in the SNN do not transmit information at each propagation cycle (as happens in typical multi-layer perceptron networks), but rather transmit information only when a membrane potential—an intrinsic quality of the neuron related to its membrane electrical charge—reaches a specific value, called the threshold. When the membrane potential reaches the threshold, the neuron fires, generating a signal that travels to other neurons, which in turn increase or decrease their potentials in response. A neuron model that fires at the moment of threshold crossing is also called a spiking neuron model.[3]
While spike rates can be considered the analogue of the variable output of a traditional ANN,[4] neurobiology research has indicated that high-speed processing cannot be performed solely through a rate-based scheme. For example, humans can perform an image-recognition task that leaves no more than 10 ms of processing time per neuron across the successive layers, going from the retina to the temporal lobe. This time window is too short for rate-based encoding. The precise spike timings in a small set of spiking neurons also have a higher information-coding capacity than a rate-based approach.[5]
The most prominent spiking neuron model is the leaky integrate-and-fire model.[6] In that model, the momentary activation level (modeled as a differential equation) is normally considered to be the neuron's state, with incoming spikes pushing this value higher or lower, until the state eventually either decays or—if the firing threshold is reached—the neuron fires. After firing, the state variable is reset to a lower value.
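As an illustration, here is a minimal sketch of a leaky integrate-and-fire neuron simulated with Euler integration; the function name and parameter values are illustrative assumptions, not taken from the cited sources:

```python
import numpy as np

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_threshold=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron, integrated with the Euler method.

    input_current: one input value per time step of length dt (ms).
    Returns the membrane-potential trace and the spike times.
    """
    v = v_rest
    trace, spike_times = [], []
    for step, i_in in enumerate(input_current):
        # The potential leaks toward rest and is pushed up by incoming input
        v += (dt / tau) * (v_rest - v) + i_in
        if v >= v_threshold:              # threshold crossing: the neuron fires
            spike_times.append(step * dt)
            v = v_reset                   # state variable resets after firing
        trace.append(v)
    return np.array(trace), spike_times

# A constant drive makes the neuron fire at a regular rate
trace, spikes = simulate_lif(np.full(200, 0.06))
print(spikes)
```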
Various decoding methods exist for interpreting the outgoing spike train as a real-valued number, relying on the frequency of spikes (rate code), the time to first spike after stimulation, or the intervals between spikes.
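For concreteness, minimal sketches of those three decoding schemes follow; the function names and the example spike train are illustrative:

```python
def rate_code(spike_times, window):
    """Rate code: spike count per unit time over the observation window."""
    return len(spike_times) / window

def time_to_first_spike(spike_times, stimulus_onset=0.0):
    """Latency code: delay between stimulus onset and the first spike."""
    return min(spike_times) - stimulus_onset if spike_times else None

def interspike_intervals(spike_times):
    """Interval code: gaps between consecutive spikes."""
    return [t2 - t1 for t1, t2 in zip(spike_times, spike_times[1:])]

spikes = [12.0, 19.0, 26.0, 33.0]        # spike times in ms
print(rate_code(spikes, window=100.0))   # 0.04 spikes/ms, i.e. 40 Hz
print(time_to_first_spike(spikes))       # 12.0 ms
print(interspike_intervals(spikes))      # [7.0, 7.0, 7.0]
```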


Many multi-layer artificial neural networks are fully connected, receiving input from every neuron in the previous layer and signalling every neuron in the subsequent layer. Although these networks have achieved breakthroughs, they do not match biological networks and do not mimic neurons.[citation needed]
The biology-inspired Hodgkin–Huxley model of a spiking neuron was proposed in 1952. This model described how action potentials are initiated and propagated. Communication between neurons, which requires the exchange of chemical neurotransmitters in the synaptic gap, is described in models such as the integrate-and-fire model, the FitzHugh–Nagumo model (1961–1962), and the Hindmarsh–Rose model (1984). The leaky integrate-and-fire model (or a derivative) is commonly used because it is easier to compute than the Hodgkin–Huxley model.[7]
While the notion of an artificial spiking neural network became popular only in the twenty-first century,[8][9][10] studies between 1980 and 1995 supported the concept, and the first models of this type of ANN appeared in that period, aiming to simulate non-algorithmic intelligent information-processing systems.[11][12][13] However, the spiking neural network as a mathematical model had already been worked on in the early 1970s.[14]
As of 2019, SNNs lagged behind ANNs in accuracy, but the gap has been shrinking and has vanished on some tasks.[15]
Information in the brain is represented as action potentials (neuron spikes), which may group into spike trains or coordinated waves. A fundamental question of neuroscience is to determine whether neurons communicate by a rate or temporal code.[16] Temporal coding implies that a single spiking neuron can replace hundreds of hidden units on a conventional neural net.[1]
SNNs define a neuron's current state as its potential (possibly modeled as a differential equation).[17] An input pulse causes the potential to rise and then gradually decline. Encoding schemes can interpret these pulse sequences as a number, considering pulse frequency and pulse interval.[18] Using the precise time of pulse occurrence, a neural network can consider more information and offer better computing properties.[19]
SNNs compute in the continuous time domain. A neuron tests for activation only when its potential reaches a certain value. When a neuron is activated, it produces a signal that is passed to connected neurons, raising or lowering their potentials accordingly.
The SNN approach produces a continuous output instead of the binary output of traditional ANNs. Pulse trains are not easily interpretable, hence the need for encoding schemes. However, a pulse-train representation may be better suited for processing spatiotemporal data (or real-world sensory data classification).[20] SNNs connect neurons only to nearby neurons so that they process input blocks separately (similar to a CNN using filters). They consider time by encoding information as pulse trains so as not to lose information, which avoids the complexity of a recurrent neural network (RNN). Spiking neurons are more powerful computational units than traditional artificial neurons.[21]
SNNs are theoretically more powerful than so-called "second-generation networks", defined as ANNs "based on computational units that apply activation function with a continuous set of possible output values to a weighted sum (or polynomial) of the inputs"; however, SNN training issues and hardware requirements limit their use. Although unsupervised biologically inspired learning methods such as Hebbian learning and STDP are available, no effective supervised training method for SNNs provides better performance than second-generation networks.[21] Spike-based activation of SNNs is not differentiable, so gradient-descent-based backpropagation (BP) is not directly available.
SNNs have much larger computational costs for simulating realistic neural models than traditional ANNs.[22]
Pulse-coupled neural networks (PCNN) are often confused with SNNs. A PCNN can be seen as a kind of SNN.
Researchers are actively working on various topics. The first concerns differentiability. The expressions for both the forward- and backward-learning methods contain the derivative of the neural activation function, which is not differentiable because a neuron's output is 1 when it spikes and 0 otherwise. This all-or-nothing behavior disrupts gradients and makes these neurons unsuitable for gradient-based optimization. Approaches to resolving it include, among others, methods that replace the undefined derivative of the spike with a smooth surrogate during the backward pass, as sketched below.
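A minimal sketch of such a surrogate-gradient spike function, written here with PyTorch's custom-autograd mechanism; the fast-sigmoid surrogate and its scale factor are common but assumed choices:

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside step in the forward pass, smooth surrogate in the backward pass."""

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()             # all-or-nothing spike at threshold 0

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Derivative of a fast sigmoid stands in for the step function's
        # true derivative, which is zero almost everywhere
        surrogate = 1.0 / (1.0 + 10.0 * v.abs()) ** 2
        return grad_output * surrogate

v = torch.randn(5, requires_grad=True)     # membrane potentials minus threshold
spikes = SurrogateSpike.apply(v)
spikes.sum().backward()                    # gradients flow despite the step
print(v.grad)
```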
The second concerns the optimization algorithm. Standard BP can be expensive in terms of computation, memory, and communication and may be poorly suited to the hardware that implements it (e.g., a computer, brain, or neuromorphic device).[23]
Incorporating additional neuron dynamics such as spike-frequency adaptation (SFA) is a notable advance, enhancing efficiency and computational power.[6][24] These neurons sit between biological complexity and computational complexity.[25] Originating from biological insights, SFA offers significant computational benefits by reducing power usage,[26] especially in cases of repetitive or intense stimuli. This adaptation improves signal/noise clarity and introduces an elementary short-term memory at the neuron level, which in turn improves accuracy and efficiency.[27] This was mostly achieved using compartmental neuron models; simpler versions, neuron models with adaptive thresholds, are an indirect way of achieving SFA. SFA equips SNNs with improved learning capabilities, even with constrained synaptic plasticity, and elevates computational efficiency.[28][29] This feature lessens the demand on network layers by decreasing the need for spike processing, thus lowering computational load and memory access time—essential aspects of neural computation. Moreover, SNNs utilizing neurons capable of SFA achieve levels of accuracy that rival those of conventional ANNs,[30][31] while also requiring fewer neurons for comparable tasks. This efficiency streamlines the computational workflow and conserves space and energy, while maintaining technical integrity. High-performance deep spiking neural networks can operate with 0.3 spikes per neuron.[32]
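As an illustration of the adaptive-threshold variant mentioned above, here is a hedged sketch extending the leaky integrate-and-fire update; all parameter values are illustrative assumptions:

```python
import numpy as np

def simulate_adaptive_lif(input_current, dt=1.0, tau=20.0, tau_adapt=200.0,
                          v_rest=0.0, v_reset=0.0, theta0=1.0, beta=0.5):
    """LIF neuron whose threshold rises after each spike and decays back.

    The moving threshold theta implements spike-frequency adaptation:
    under sustained input, successive spikes become harder to trigger,
    so the firing rate adapts downward.
    """
    v, theta = v_rest, theta0
    spike_times = []
    for step, i_in in enumerate(input_current):
        v += (dt / tau) * (v_rest - v) + i_in
        theta += (dt / tau_adapt) * (theta0 - theta)  # threshold relaxes to theta0
        if v >= theta:
            spike_times.append(step * dt)
            v = v_reset
            theta += beta                             # each spike raises the threshold
    return spike_times

# Under sustained drive, interspike intervals lengthen as the threshold adapts
spikes = simulate_adaptive_lif(np.full(500, 0.08))
print(np.diff(spikes))
```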
SNNs can in principle be applied to the same applications as traditional ANNs.[33] In addition, SNNs can model the central nervous system of biological organisms, such as an insect seeking food without prior knowledge of the environment.[34] Due to their relative realism, they can be used to study biological neural circuits. Starting with a hypothesis about the topology of a biological neuronal circuit and its function, recordings of this circuit can be compared to the output of a corresponding SNN, evaluating the plausibility of the hypothesis. SNNs lack effective training mechanisms, which can complicate some applications, including computer vision.
When using SNNs for image-based data, the images need to be converted into binary spike trains.[35] Several types of encoding can be used.[36]
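One common scheme, named here as an example rather than drawn from the cited list, is Poisson rate coding, in which a pixel's intensity sets its probability of spiking at each time step. A minimal sketch:

```python
import numpy as np

def poisson_encode(image, num_steps=100, max_rate=0.2, rng=None):
    """Poisson rate coding: brighter pixels spike more often.

    image: 2-D array with values in [0, 1].
    Returns a binary array of shape (num_steps, height, width).
    """
    rng = rng or np.random.default_rng(0)
    prob = image * max_rate                 # per-step spike probability
    return (rng.random((num_steps, *image.shape)) < prob).astype(np.uint8)

image = np.random.default_rng(1).random((28, 28))  # stand-in for a grayscale image
spike_train = poisson_encode(image)
print(spike_train.shape, spike_train.mean())       # (100, 28, 28), about 0.1
```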
A diverse range of application software can simulate SNNs. This software can be classified according to its uses.

One class of software simulates complex neural models in detail; large networks usually require lengthy processing. Several dedicated simulators serve as candidates.[37]
Efforts to implement hardware-based SNNs began in the 1980s,[41] when researchers started exploring brain-inspired neuromorphic systems. In the following decades, advances in semiconductor technology enabled the development of several notable projects.[42]
One such project is SpiNNaker, developed at the University of Manchester, which utilizes over a million processing cores for large-scale simulation of spiking neurons.
TrueNorth, developed by IBM, is one of the first commercial neuromorphic chips, designed for energy-efficient and parallel processing.
Loihi, an Intel research chip, focuses on online learning and adaptability in neuromorphic modeling.
In research contexts, platforms such as BrainScaleS, developed in Europe, integrate analog and digital circuitry to accelerate neural simulations. Neurogrid, from Stanford University, was designed to efficiently simulate biological neurons and synapses. The Dynap-se[43] models, developed by iniLabs, are a family of low-power, event-based neuromorphic chips intended for use in robotics and Internet of Things (IoT) applications.
In addition, hardware based on memristors and other emerging memory technologies is being explored for the implementation of SNNs, with the goal of achieving lower power consumption and improved compatibility with biological neural models.[42]


Sutton and Barto proposed that future neuromorphic architectures[44] will comprise billions of nanosynapses, which require a clear understanding of the accompanying physical mechanisms. Experimental systems based on ferroelectric tunnel junctions have been used to show that STDP can be harnessed from heterogeneous polarization switching. Through combined scanning probe imaging, electrical transport and atomic-scale molecular dynamics, conductance variations can be modelled by nucleation-dominated domain reversal. Simulations showed that arrays of ferroelectric nanosynapses can autonomously learn to recognize patterns in a predictable way, opening the path towards unsupervised learning.[45]

Classification capabilities of spiking networks trained according to unsupervised learning methods[46] have been tested on benchmark datasets such as the Iris, Wisconsin Breast Cancer, and Statlog Landsat datasets.[47][48] Various approaches to information encoding and network design have been used, such as a 2-layer feedforward network for data clustering and classification. Based on Hopfield (1995), the authors implemented models of local receptive fields combining the properties of radial basis functions and spiking neurons to convert input signals having a floating-point representation into a spiking representation.[49][50]
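A hedged sketch of that kind of conversion: overlapping Gaussian receptive fields cover the input range, and each field's activation maps to a first-spike time (stronger activation, earlier spike). The field count, width, and time window below are illustrative assumptions:

```python
import numpy as np

def gaussian_receptive_field_encode(x, num_fields=8, x_min=0.0, x_max=1.0,
                                    t_max=10.0):
    """Convert a float into first-spike times across a neuron population.

    Each neuron has a Gaussian receptive field; the neuron whose center
    lies closest to x responds most strongly and so spikes earliest.
    """
    centers = np.linspace(x_min, x_max, num_fields)
    width = (x_max - x_min) / (num_fields - 1)        # overlapping fields
    activation = np.exp(-0.5 * ((x - centers) / width) ** 2)
    # Map activation in (0, 1] to spike times in [0, t_max): strong -> early
    return t_max * (1.0 - activation)

print(np.round(gaussian_receptive_field_encode(0.3), 2))
```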