Leaky Integrate-and-Fire
About a year ago I attended a conference where a professor presented her team's work on simulating real neurons. I was curious about how biological neurons can be modeled, so I took the opportunity of a long train journey to browse a bit.
I started my search with “spiking neural network”. The most surprising thing I learned is that real neurons seem to do nothing most of the time. Suddenly they fire a spike to alert other neurons, but mostly they sit idle. The frequency at which they fire is quite low, and this is probably why the brain consumes so little energy. If you search on Google you will quickly see that this frequency is usually less than 1 kHz, and usually much less. Conversely, the “artificial neurons” we use in deep learning models fire continuously and synchronously at gigahertz rates: every parameter needs to be evaluated at each clock cycle, and even when the output is zero we still recalculate the output of every neuron in the network.
Biological neurons are often modeled as a capacitor with a reset circuit next to it. Each time an input spike is received, the capacitor's charge increases. When the charge is high enough, a spike is emitted and the charge is reset. A more realistic model, the leaky integrate-and-fire, also puts a resistance in parallel with the capacitor. Its role is to slowly discharge the capacitor, clearing the effect of spikes received too long ago. Input spikes are significant only if they arrive within a short time interval.
This tutorial gives a good presentation of the models and the formulas: https://compneuro.neuromatch.io/tutorials/W2D3_BiologicalNeuronModels/student/W2D3_Tutorial1.html
A biological neuron is composed of dendrites, root-like structures that receive the inputs from other neurons. The dendrites enter the body of the neuron cell, from which an axon exits: the axon is the output connector that propagates the spike to thousands of other neurons. Axons and dendrites do not touch directly; they are connected via synapses.

The leaky integrate-and-fire model consists of this formula:

$$C \, \frac{du}{dt} = -\frac{u(t) - u_{rest}}{R} + I(t)$$

with the extra rule that when u(t) reaches a threshold, a spike is emitted and the potential is reset to u_rest.
The dynamics of the membrane potential depend on two components: the leaky part (the term with the internal resistance R) and the input-driven part (integrated through the capacitance C). u_rest is the resting voltage, the value the membrane settles to when we have not enough input spikes.
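To make this concrete, here is a minimal simulation sketch in Python (my own toy, not from the tutorial; the parameter values, like the 100 MΩ resistance or the -50 mV threshold, are illustrative assumptions): a forward-Euler integration of the formula above, with the reset rule applied whenever the potential crosses the threshold.

```python
# Minimal leaky integrate-and-fire simulation (a sketch; the parameter
# values are illustrative assumptions, not taken from the tutorial).
dt = 0.1e-3       # integration step: 0.1 ms
T = 0.3           # simulated time: 300 ms
R = 100e6         # membrane resistance: 100 MOhm
C = 200e-12       # membrane capacitance: 200 pF (tau = R*C = 20 ms)
u_rest = -70e-3   # resting potential: -70 mV
theta = -50e-3    # firing threshold: -50 mV
I = 250e-12       # constant input current: 250 pA

u = u_rest
spike_times = []
for step in range(int(T / dt)):
    # forward Euler on: C du/dt = -(u - u_rest)/R + I
    u += dt / C * (-(u - u_rest) / R + I)
    if u >= theta:                     # threshold crossed:
        spike_times.append(step * dt)  # record the spike...
        u = u_rest                     # ...and reset the charge

print(f"{len(spike_times)} spikes, first one after {spike_times[0] * 1e3:.1f} ms")
```

With these numbers the steady-state potential (u_rest + R·I = -45 mV) sits above the threshold, so the neuron fires regularly; drop I below 200 pA and the leak wins, so no spike is ever emitted. That is the resistance doing its job of erasing inputs that are too weak or too far apart.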
The presentation suggests some other interesting points.
The spikes are all alike: the shape is the same, and in 1-2 ms the spike is over. It seems that what matters is just that the spike happened; there is no concept of a high or a low spike. Conversely, artificial neurons have an output that is a real number, maybe normalized but with variability, not just 1 or 0.
There exist more sophisticated models that can explain another phenomenon called adaptation. Suppose you present the same constant input to a neuron: with the leaky integrate-and-fire model the neuron will charge, emit a spike, recharge, emit another spike… so it generates an output sequence with a specific frequency. It seems that nature does not like this behavior: a constant signal does not carry much information, so real neurons keep emitting spikes, but with a frequency that decreases. The spikes are still there, just less and less frequent.
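A small extension of the sketch above can reproduce this behavior (again a toy under my own assumptions, not a model taken from the presentation): add an adaptation current w that jumps at every spike and decays slowly, opposing the input. This kind of mechanism is usually called spike-triggered adaptation.

```python
import numpy as np

# Toy spike-frequency adaptation: an adaptation current w grows with
# every spike and decays slowly, so a constant input produces spikes
# that become less and less frequent. Parameters are illustrative.
dt, T = 0.1e-3, 1.0                 # 0.1 ms steps, 1 s of simulation
R, C = 100e6, 200e-12               # same membrane as before
u_rest, theta = -70e-3, -50e-3
I = 300e-12                         # constant input current: 300 pA
tau_w = 200e-3                      # adaptation decays slowly (200 ms)
b = 30e-12                          # adaptation jump per spike: 30 pA

u, w = u_rest, 0.0
spikes = []
for step in range(int(T / dt)):
    # membrane: leak + input, with the adaptation current w subtracted
    u += dt / C * (-(u - u_rest) / R + I - w)
    w += dt * (-w / tau_w)          # w decays between spikes
    if u >= theta:
        spikes.append(step * dt)
        u = u_rest                  # reset the membrane...
        w += b                      # ...and strengthen the adaptation

isi = np.diff(spikes) * 1e3         # inter-spike intervals in ms
print(f"first interval: {isi[0]:.1f} ms, last interval: {isi[-1]:.1f} ms")
```

The printed intervals grow steadily: the neuron still signals the constant input, but spends fewer and fewer spikes on it, which is exactly the decreasing frequency described above.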
But how do biological neurons learn? With artificial neurons you have algorithms that use the gradient and adapt the neuron's coefficients until it learns to produce the correct output. I ask myself whether a model exists to explain how the dendrites and axons change in response to all these spikes. Maybe a good topic for another train journey.