Importance of Spiking in Nengo

In what ways is having spiking neurons, instead of rate neurons via LIFRate() (which, from what I understand, sends out a continuous firing-rate value instead of spikes), useful?

The only cases I can come up with are:

  • Matching empirical data
  • Noise in spiking is different
  • Sending spiking data with hardware saves power

But are there any functional benefits to spikes over just sending out a continuous rate value?

For the particular case of LIF vs LIFRate, I think you’re entirely right that those are the three reasons.

In order to get functional benefits out of spikes, you need a more complex neuron model. For example, an Izhikevich neuron in bursting mode will produce bursts of output given a constant input. This means that it would be possible to decode wacky functions in a purely feedforward system (functions that, with LIF or LIFRate neurons, would require a recurrent connection).
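As a minimal sketch of what that looks like, assuming Nengo's built-in `nengo.Izhikevich` model: the reset parameters below follow Izhikevich's "chattering" (bursting) preset (c = -50 mV, d = 2), though the exact values are illustrative and may need tuning.

```python
import nengo

with nengo.Network() as model:
    stim = nengo.Node(0.5)  # constant input
    ens = nengo.Ensemble(
        n_neurons=100,
        dimensions=1,
        # Bursting-regime parameters (illustrative values)
        neuron_type=nengo.Izhikevich(reset_voltage=-50, reset_recovery=2),
    )
    nengo.Connection(stim, ens)
    spikes = nengo.Probe(ens.neurons)  # record the bursty spike trains

with nengo.Simulator(model) as sim:
    sim.run(1.0)
```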

If you’re doing some sort of learning with spike-timing-dependent plasticity (STDP), you’ll probably need spikes. But I believe we don’t currently have any learning rule that implements true STDP?

I’ve done doublet and triplet STDP for some ABR stuff… I used the doublet rule for a summer school example (here for those who have access). Though I guess you could argue that it’s not “true STDP” because it works on LIFRate neurons too (it uses filtered activities, whether they’re rates or spikes).
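For the curious, here is a rough sketch of a doublet (pair-based) rule on filtered activities. This is not the actual ABR code; the function name, trace time constants, and learning rates are all hypothetical. Because everything operates on low-pass-filtered traces, `s_pre`/`s_post` can be either spikes or rates, which is exactly the point above.

```python
import numpy as np

def doublet_stdp_step(w, x_pre, y_post, s_pre, s_post, dt=0.001,
                      tau_plus=0.02, tau_minus=0.02,
                      a_plus=1e-3, a_minus=1.2e-3):
    # Low-pass filter the pre/post activities into eligibility traces.
    x_pre += dt * (s_pre - x_pre / tau_plus)
    y_post += dt * (s_post - y_post / tau_minus)
    # Potentiate on post activity in proportion to the pre trace;
    # depress on pre activity in proportion to the post trace.
    # w has shape (n_post, n_pre).
    w += a_plus * np.outer(s_post, x_pre) - a_minus * np.outer(y_post, s_pre)
    return w, x_pre, y_post
```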

The only additional thing that I can think of is that there might be subtle timing differences between spiking and non-spiking in terms of how long it takes to inhibit an ensemble, reach some target value, or integrate to some point… but that’s probably more dependent on synaptic filters than the neuron model.

I think this is an important and maybe under-appreciated point. The dynamics of computation can significantly change with spiking neurons, especially with short synaptic time constants. I think this isn’t something we have necessarily explored much, but one simple example I was playing around with a little while ago is lateral inhibition.

Lateral inhibition networks are often criticized for being slow to converge, particularly if two nodes have very similar input. For example, if you’ve got a network with three rate neurons, with inputs [1.0, 0.95, 0.5], the first neuron should eventually win out, but it could take a while, because the second and third neurons are also going to be inhibiting the first, and it will take a while to settle. If you use spiking neurons, however, the first neuron will spike first, and if you have fast inhibition, very quickly inhibit the other two neurons. If the inhibition is fast enough, the other two neurons might never spike at all, allowing the first neuron to win out very quickly.
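A quick sketch of that example in Nengo, with illustrative gains, biases, inhibitory weights, and synapse values (not anything tuned):

```python
import nengo
import numpy as np

with nengo.Network() as model:
    stim = nengo.Node([1.0, 0.95, 0.5])
    ens = nengo.Ensemble(
        n_neurons=3, dimensions=1,
        gain=np.ones(3), bias=np.ones(3),  # three identical LIF neurons
    )
    # Inject each input directly as current into one neuron.
    nengo.Connection(stim, ens.neurons)
    # Fast, mutual (off-diagonal) inhibition: the first neuron to spike
    # suppresses the other two before they reach threshold.
    nengo.Connection(ens.neurons, ens.neurons,
                     transform=-2 * (1 - np.eye(3)), synapse=0.005)
    spikes = nengo.Probe(ens.neurons)

with nengo.Simulator(model) as sim:
    sim.run(0.2)
```

With rate neurons in the same network, all three units inhibit each other continuously and the competition has to settle gradually; with spikes and a fast synapse, the first spike ends the competition almost immediately.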


I believe the neural evidence actually suggests that it’s not spiking per se that matters, but membrane voltages, so this really is ‘STDP’ (i.e., if you prevent a postsynaptic cell from spiking, it can still exhibit STDP-like changes if you induce the right membrane voltage changes).


Why does the error vary for the same input across the three modes, i.e. direct, spiking, and rate mode? If direct mode gives accurate values on simpler models, then why do we use spiking mode?

For the reasons I mentioned in the original post. Assuming your primary goal isn’t minimising error, you might need to:

  • Match empirical spiking neurological data
  • Use spiking-specific noise patterns
  • Use hardware that only supports spiking neurons
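If you want to see the error differences for yourself, here is a quick sketch comparing the three modes on the same input. The network, probe filter, and run time are illustrative choices, and the reported error includes the synaptic filter’s lag:

```python
import nengo
import numpy as np

def rmse_for(neuron_type):
    with nengo.Network(seed=0) as model:
        stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))
        ens = nengo.Ensemble(50, dimensions=1, neuron_type=neuron_type)
        nengo.Connection(stim, ens)
        p_in = nengo.Probe(stim)
        p_out = nengo.Probe(ens, synapse=0.01)
    with nengo.Simulator(model, progress_bar=False) as sim:
        sim.run(1.0)
    return np.sqrt(np.mean((sim.data[p_out] - sim.data[p_in]) ** 2))

# Direct mode computes the value exactly; rate mode adds distortion;
# spiking mode adds distortion plus spike noise.
for nt in (nengo.Direct(), nengo.LIFRate(), nengo.LIF()):
    print(type(nt).__name__, rmse_for(nt))
```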

Another reason to use spiking (or rate) mode is that it, surprisingly, can be faster under certain circumstances.

  • This can be the case when using nengo_ocl, the OpenCL backend for Nengo.
  • This can be the case with new optimizations for the reference backend that hopefully will be included in one of the upcoming Nengo versions.
  • This can be the case on specialized hardware like SpiNNaker, which is specifically designed to simulate spiking neural networks in real time.

It occurred to me that there is another reason why it might be important to consider spiking neurons. Under the assumption that the tuning curves of the neurons are distributed such that the representational error is minimized, one gets different optimal distributions for rate neurons (no noise) than for spiking neurons (noise). The distribution of tuning curves (primarily, but not solely, controlled by the intercept distribution in Nengo) affects the distortion of the representation (present for both spiking and rate neurons) differently than it affects the noise (present only for spiking neurons, and dependent on the number of neurons and their maximum firing rates).
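A rough sketch of how one might poke at this empirically, assuming nothing beyond standard Nengo; the seeds, intercept distributions, and run times are illustrative, and a real comparison would average over many seeds:

```python
import nengo
import numpy as np

def repr_rmse(neuron_type, intercepts):
    with nengo.Network(seed=0) as model:
        stim = nengo.Node(0.5)
        ens = nengo.Ensemble(50, dimensions=1, neuron_type=neuron_type,
                             intercepts=intercepts)
        nengo.Connection(stim, ens)
        p = nengo.Probe(ens, synapse=0.02)
    with nengo.Simulator(model, progress_bar=False) as sim:
        sim.run(1.0)
    out = sim.data[p][200:]  # skip the initial filter transient
    return np.sqrt(np.mean((out - 0.5) ** 2))

# The intercept distribution that minimizes error for rate neurons
# need not be the one that minimizes error for spiking neurons.
for nt in (nengo.LIFRate(), nengo.LIF()):
    for icpt in (nengo.dists.Uniform(-1, 1), nengo.dists.Uniform(-0.5, 0.5)):
        print(type(nt).__name__, icpt, repr_rmse(nt, icpt))
```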