Anyone following recent developments in machine learning has heard of the second generation of artificial neural networks currently used for machine learning. These networks are usually fully connected, and they take and output continuous values. While they have allowed us to make breakthroughs in many areas, they are biologically inaccurate: they do not mimic the actual mechanisms of the neurons in our brain.

 

The third generation of neural networks, spiking neural networks (SNNs), aims to bridge the gap between neuroscience and machine learning by using biologically realistic models of neurons for computation. A spiking neural network is fundamentally different from the neural networks common in the machine-learning community. SNNs operate on spikes: discrete events that occur at particular points in time, rather than continuous values. Whether a spike is generated is determined by differential equations that represent various biological processes, the most important of which is the neuron's membrane potential. Essentially, once a neuron's membrane potential reaches a certain threshold, it fires, and the potential is reset. The most common such model is the leaky integrate-and-fire (LIF) model. In addition, spiking neural networks are usually sparsely connected and use specialized network topologies.
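The LIF dynamics just described can be sketched in a few lines of Python. This is a minimal illustration, not code from the article; the time constant, threshold, and input-current values are arbitrary choices made here for demonstration:

```python
import numpy as np

def simulate_lif(I, dt=1.0, tau=20.0, v_rest=0.0, v_reset=0.0, v_th=1.0):
    """Simulate a leaky integrate-and-fire neuron.

    I: input current at each time step.
    Returns the membrane-potential trace and the spike times.
    """
    v = v_rest
    trace, spikes = [], []
    for t, i_t in enumerate(I):
        # Euler step of  tau * dV/dt = -(V - V_rest) + I(t)
        v += dt / tau * (-(v - v_rest) + i_t)
        if v >= v_th:          # threshold crossed: emit a spike...
            spikes.append(t)
            v = v_reset        # ...and reset the membrane potential
        trace.append(v)
    return np.array(trace), spikes

# A constant supra-threshold input drives the neuron to fire periodically.
trace, spikes = simulate_lif(np.full(200, 1.5))
```

Note that the neuron's output is just the list of spike times: all information is carried by when it fires, not by a continuous activation value.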

 

Differential equations governing the membrane potential in the LIF model
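The original figure is not reproduced here; the standard textbook form of the LIF membrane equation (an assumption about what the figure showed, based on the model named in the text) is:

```latex
\tau_m \frac{dV(t)}{dt} = -\bigl(V(t) - V_{\text{rest}}\bigr) + R\,I(t),
\qquad
V(t) \ge V_{\text{th}} \;\Rightarrow\; V(t) \leftarrow V_{\text{reset}}
```

Here $\tau_m$ is the membrane time constant, $I(t)$ the input current, and the second clause expresses the fire-and-reset rule described above.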

 

A diagram of the membrane potential during a spike

 

Spike trains across a network of 3 neurons

 

A complete spiking neural network

At first glance this may look like a step backwards: we have converted continuous outputs into binary ones, and spike trains are not very interpretable. However, spike trains improve our ability to process spatio-temporal data, i.e., sensory data from the real world. Spatial, because neurons connect only to nearby neurons, so they naturally process chunks of the input separately (similar to how a CNN uses filters). Temporal, because spike trains unfold over time, so the timing of the spikes carries information that is lost in a plain binary encoding. This lets us process temporal data naturally, without the extra complexity of recurrent neural networks. In fact, it has been shown that spiking neurons are fundamentally more powerful computational units than traditional artificial neurons.
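One common way to see how spike timing can carry a continuous value is rate (Poisson) coding. The sketch below is an illustration of this general idea, not a method from the article; the function name and parameters are invented here:

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_encode(value, n_steps=100, max_rate=0.5):
    """Encode a continuous value in [0, 1] as a binary spike train.

    At each time step the neuron fires with probability
    value * max_rate, so the firing *rate* carries the value,
    while the exact spike *times* add temporal structure that a
    single continuous activation cannot represent.
    """
    return (rng.random(n_steps) < value * max_rate).astype(int)

# A larger input value produces a denser spike train.
train = poisson_encode(0.8)
```

Downstream spiking neurons can then respond not just to how many spikes arrive, but to when they arrive relative to each other.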

Given that spiking neural networks are, in theory, more powerful than second-generation networks, why are they not widely used? The main problem with SNNs today is training. Although we have unsupervised biological learning methods such as Hebbian learning and STDP (spike-timing-dependent plasticity), there is no known effective supervised training method that lets SNNs outperform second-generation networks. Because spike trains are non-differentiable, gradient descent cannot be used to train SNNs without losing the precise timing information in the spike trains. Therefore, to apply SNNs correctly to real-world tasks, we need to develop an effective supervised learning method. This is a very difficult task, since doing so while preserving these networks' biological realism would essentially amount to determining how the human brain actually learns.
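The pair-based form of the STDP rule mentioned above can be sketched as follows. The constants and the exponential window are standard textbook choices, not values from the article: a synapse is strengthened when the presynaptic spike precedes the postsynaptic one, and weakened otherwise.

```python
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP weight change for a spike-time difference
    dt = t_post - t_pre (in ms).

    dt > 0 (pre fires before post): potentiation, decaying
    exponentially with the delay.
    dt < 0 (post fires before pre): depression.
    """
    if dt > 0:
        return a_plus * np.exp(-dt / tau)
    elif dt < 0:
        return -a_minus * np.exp(dt / tau)
    return 0.0

# Pre fires 5 ms before post -> the synapse is strengthened.
dw_pot = stdp_dw(5.0)
# Post fires 5 ms before pre -> the synapse is weakened.
dw_dep = stdp_dw(-5.0)
```

Note that this rule is purely local and unsupervised: the weight update depends only on the relative timing of two spikes, which is exactly why it does not directly provide the supervised, task-driven training signal that second-generation networks get from gradient descent.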

Another problem we must solve is that simulating spiking neural networks on standard hardware is computationally intensive, since it requires simulating differential equations. However, neuromorphic hardware such as IBM's TrueNorth aims to address this by simulating neurons with dedicated hardware that can exploit the discrete and sparse nature of neuronal spiking behavior.

The future of spiking neural networks is unclear. On the one hand, they are the natural successors of recurrent neural networks; on the other hand, they are not yet a practical tool for most tasks. SNNs have some real-world applications in real-time image and audio processing, but the literature on practical applications remains sparse. Most papers on SNNs are either purely theoretical or report performance below that of a simple, fully connected second-generation network. However, many teams are working on supervised learning rules for SNNs, so I remain optimistic about their future development.


This article was recommended by @Love coco – Love life of Beijing Post and translated by the Alibaba Cloud Yunqi Community.

Original title: Spiking Neural Networks, the Next Generation of Machine Learning

This is an abridged translation; for more detail, please see the attachment.