In what ways is having spiking neurons, instead of just using rate neurons via LIFRate() (which, from what I understand, sends out a continuous firing-rate value instead of spikes), useful?
The only cases I can come up with are:
Matching empirical data
Noise in spiking is different
Sending spiking data with hardware saves power
But are there any functional benefits to spikes over just sending out a continuous rate value?
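To make the distinction in the question concrete, here is a minimal pure-Python sketch (not Nengo code; the time constants and input current are arbitrary choices): a spiking LIF neuron emits discrete events, while the "rate" version just reports the closed-form firing rate for the same constant input.

```python
import math

# Spiking LIF: integrate a constant input current J and emit discrete spikes.
dt = 1e-4        # simulation timestep (s)
tau_rc = 0.02    # membrane time constant (s)
tau_ref = 0.002  # refractory period (s)
J = 2.0          # constant input current (firing threshold is 1)

v, refractory, spikes = 0.0, 0.0, 0
for _ in range(int(1.0 / dt)):  # simulate 1 s
    if refractory > 0:
        refractory -= dt
    else:
        v += dt / tau_rc * (J - v)
        if v >= 1.0:
            spikes += 1
            v = 0.0
            refractory = tau_ref

# "Rate LIF": the analytic steady-state firing rate for the same input.
rate = 1.0 / (tau_ref + tau_rc * math.log(J / (J - 1.0)))
```

The spike count over one second comes out close to the analytic rate, which is the sense in which the rate neuron is a smoothed stand-in for the spiking one.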
For the particular case of LIF vs LIFRate, I think you're entirely right that those are the three reasons.
In order to get functional benefits out of spikes, you need a more complex neuron model. For example, an Izhikevich neuron in bursting mode will produce bursts of output given a constant input. This means that it would be possible to decode wacky functions in a purely feedforward system (which with LIF or LIFRate would require a recurrent connection).
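As a sketch of that point, here are the standard Izhikevich (2003) equations in pure Python with a bursting ("chattering") parameter set; this is not Nengo's Izhikevich object, and the input current and timestep are arbitrary choices:

```python
# Izhikevich model with bursting ("chattering") parameters: a constant
# input current produces bursts of spikes rather than a steady rate.
a, b, c, d = 0.02, 0.2, -50.0, 2.0  # c, d chosen for bursting
v, u = -65.0, 0.2 * -65.0           # membrane potential (mV), recovery
I = 10.0                            # constant input current
dt = 0.5                            # Euler timestep (ms)

spike_times = []
for step in range(int(500 / dt)):   # simulate 500 ms
    v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:                   # spike: reset v, bump recovery u
        spike_times.append(step * dt)
        v, u = c, u + d

isis = [t2 - t1 for t1, t2 in zip(spike_times, spike_times[1:])]
# Short intra-burst intervals alternate with longer inter-burst pauses,
# even though the input never changes.
```

The constant input producing a temporally structured output is what gives a feedforward decode something to work with that a rate neuron can't provide.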
If you're doing some sort of learning with spike-timing-dependent plasticity (STDP), you'll probably need spikes. But I believe we don't currently have any learning rule that implements true STDP?
I've done doublet and triplet STDP for some ABR stuff… I used the doublet rule for a summer school example (here for those that have access). Though I guess you could argue that it's not "true STDP" because it works on LIFRate neurons too (it uses filtered activities, whether they're rates or spikes).
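The filtered-activity idea can be sketched like this (pure Python; the trace time constant and learning rates are made-up values, and this is my paraphrase of a generic doublet rule, not the actual summer-school code):

```python
# Doublet STDP using exponentially filtered spike traces.
# A pre spike followed closely by a post spike potentiates the weight;
# the reverse ordering would depress it.
dt = 0.001                    # s
tau = 0.02                    # trace time constant (s)
a_plus, a_minus = 0.01, 0.012
w = 0.5                       # initial weight

pre_trace = post_trace = 0.0
pre_spike_steps = {10}        # pre fires at 10 ms
post_spike_steps = {15}       # post fires at 15 ms (pre-before-post)

for step in range(50):
    pre = 1.0 if step in pre_spike_steps else 0.0
    post = 1.0 if step in post_spike_steps else 0.0
    # decay the traces, then add any new spikes
    pre_trace += -dt / tau * pre_trace + pre
    post_trace += -dt / tau * post_trace + post
    # potentiate on post spikes, depress on pre spikes
    w += a_plus * pre_trace * post - a_minus * post_trace * pre

# w ends up slightly above 0.5, since pre preceded post
```

Because the rule only ever sees the filtered traces, you can feed it filtered rates instead of filtered spikes and it still does something sensible, which is why it arguably isn't "true STDP".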
The only additional thing that I can think of is that there might be subtle timing differences between spiking and non-spiking in terms of how long it takes to inhibit an ensemble, reach some target value, or integrate to some point… but that's probably more dependent on synaptic filters than the neuron model.
I think this is an important and maybe under-appreciated point. The dynamics of computation can significantly change with spiking neurons, especially with short synaptic time constants. This isn't something we have necessarily explored much, but one simple example I was playing around with a little while ago is lateral inhibition.

Lateral inhibition networks are often criticized for being slow to converge, particularly if two nodes have very similar input. For example, if you've got a network with three rate neurons, with inputs [1.0, 0.95, 0.5], the first neuron should eventually win out, but it could take a while, because the second and third neurons are also going to be inhibiting the first, and it will take a while to settle. If you use spiking neurons, however, the first neuron will spike first, and if you have fast inhibition, it will very quickly inhibit the other two neurons. If the inhibition is fast enough, the other two neurons might never spike at all, allowing the first neuron to win out very quickly.
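Here is a stripped-down pure-Python version of that scenario (not the actual model I was playing with; the LIF time constant, threshold, and inhibition strength are arbitrary): three spiking neurons with mutual inhibition, where the neuron with the largest input spikes first and immediately suppresses the others.

```python
# Spiking winner-take-all via fast lateral inhibition.
# The neuron with the largest input reaches threshold first; its spike
# knocks the competitors' membrane voltages back down, so with strong
# enough inhibition they never spike at all.
dt = 0.001        # s
tau = 0.02        # membrane time constant (s)
threshold = 0.8
inhibition = 0.8  # subtracted from competitors' voltages on each spike

inputs = [1.0, 0.95, 0.5]
v = [0.0, 0.0, 0.0]
spike_counts = [0, 0, 0]
first_spiker = None

for step in range(1000):  # 1 s
    spiked = []
    for i in range(3):
        v[i] += dt / tau * (inputs[i] - v[i])
        if v[i] >= threshold:
            spiked.append(i)
    for i in spiked:
        spike_counts[i] += 1
        v[i] = 0.0  # reset after spiking
        if first_spiker is None:
            first_spiker = i
        for j in range(3):
            if j != i:
                v[j] = max(0.0, v[j] - inhibition)

# Neuron 0 wins on its very first spike; neurons 1 and 2 stay silent.
```

With rate neurons there is no discrete "first spike" event, so the competition has to play out through the slower continuous dynamics instead.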
I believe the neural evidence actually suggests that it's not spiking per se that matters, but membrane voltages, so this really is "STDP" (i.e., if you prevent a postsynaptic cell from spiking, it can still exhibit STDP-like changes if you induce the right membrane voltage changes).
Why does the error vary for the same input across the three modes, i.e. direct, spiking, and rate? If direct mode gives accurate values on simpler models, then why do we use spiking mode?
It occurred to me that there is another reason why it might be important to consider spiking neurons. Under the assumption that the tuning curves of the neurons are distributed such that the error in the representation is minimized, one gets different distributions when considering rate neurons (no noise) vs. spiking neurons (noise). The distribution of tuning curves (primarily, but not solely, controlled by the intercept distribution in Nengo) affects the distortion of the representation (present for both spiking and rate neurons) differently than it affects the noise (present only for spiking neurons, and dependent on the neuron count and maximum firing rates).
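A rough numpy sketch of that decomposition (rectified-linear tuning curves and a Gaussian stand-in for spiking variability, both simplifications I'm making up here): decode x from a population, measure the static distortion, then add noise to the activities and measure the total error. Changing the intercept distribution moves these two error terms by different amounts.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 50
x = np.linspace(-1, 1, 200)

# Rectified-linear tuning curves with random encoders, gains, intercepts.
encoders = rng.choice([-1.0, 1.0], size=n_neurons)
intercepts = rng.uniform(-0.9, 0.9, size=n_neurons)
gains = rng.uniform(50.0, 100.0, size=n_neurons)
A = np.maximum(0.0, gains * (np.outer(x, encoders) - intercepts))

# Least-squares decoders for the identity function.
decoders = np.linalg.lstsq(A, x, rcond=None)[0]

# Static distortion: error of the noiseless (rate) representation.
distortion = np.sqrt(np.mean((A @ decoders - x) ** 2))

# Spiking adds variability on top of the rates; emulate with noise.
A_noisy = A + rng.normal(0.0, 10.0, size=A.shape)
total = np.sqrt(np.mean((A_noisy @ decoders - x) ** 2))
```

For rate neurons only the distortion term exists, so the error-minimizing tuning-curve distribution comes out differently than for spiking neurons, where the noise term also has to be traded off.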