Spiking Activation

Hi Nengo team!

I have been poring over SpikingActivation lately, as well as the examples (namely Classifying Fashion MNIST with spiking activations). Traditional activation functions like Sigmoid, ReLU, etc. all have explicit formulas, so I searched for the specific formula of SpikingActivation but still haven’t found it. I am hoping to get a better understanding of how this activation function works. Specifically, my questions are:

  • What is the specific formula of SpikingActivation?
  • Is there any difference in the formula between FPGA and Loihi, etc.?

I believe those are all the questions I have for now. I hope I’m not asking too much of the Neng’ gang. :see_no_evil:

Thanks in advance, and happy weekend!

Hello, and welcome to the forum!

A good reference for this is [2002.03553] A Spike in Performance: Training Hybrid-Spiking Neural Networks with Quantized Activation Functions (where it appears as Algorithm 1). Interestingly, this is cited in Nengo’s RegularSpiking neuron type (which is the Nengo equivalent of SpikingActivation), but not in SpikingActivation, so we’ll look at fixing that.

It’s basically a general formula that can take any continuous static (rate-based) activation function and turn it into a spiking equivalent. The main disadvantage is that it requires you to be able to evaluate the static activation function on whatever hardware you’re running on. For some neurons this is fairly easy (e.g. for ReLU it’s just f(x) = max(x, 0)), but for others it can be more complex. This means that different hardware might have to support it in different ways, or might not be able to support it at all (for example, Loihi would not be able to support SpikingActivation with tf.math.tanh as the underlying function, because it doesn’t have a way to compute the tanh function).
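To make that more concrete, here is a rough NumPy sketch of the mechanism described in Algorithm 1 of that paper (the function and variable names are my own for illustration, not the actual KerasSpiking API): the rate output of the underlying activation is integrated into a hidden voltage each timestep, and whenever the voltage crosses 1 a spike is emitted and subtracted, so the spike train averages out to the underlying rate.

```python
import numpy as np


def relu(x):
    """The underlying static (rate-based) activation: f(x) = max(x, 0)."""
    return np.maximum(x, 0)


def spiking_step(x, voltage, dt=0.001, rate_fn=relu):
    """Advance the spiking wrapper by one timestep of length ``dt``.

    The rate ``rate_fn(x)`` is integrated into a hidden voltage; every time
    the voltage crosses 1 a spike is emitted and subtracted, so the output
    averaged over many timesteps approaches ``rate_fn(x)``.
    """
    voltage = voltage + dt * rate_fn(x)  # accumulate rate * dt
    n_spikes = np.floor(voltage)         # whole spikes emitted this step
    voltage = voltage - n_spikes         # keep the sub-threshold remainder
    output = n_spikes / dt               # scale so the time-average matches the rate
    return output, voltage


# Quick check: the time-averaged spiking output approximates the rate output.
x = np.array([100.0, 250.0, -30.0])
voltage = np.zeros_like(x)
outputs = []
for _ in range(1000):
    out, voltage = spiking_step(x, voltage)
    outputs.append(out)
print(np.mean(outputs, axis=0))  # close to relu(x) = [100, 250, 0]
```

This is also why, roughly speaking, the results in the Fashion MNIST example improve when the model is run for more timesteps: the time-averaged spike train becomes a better approximation of the underlying rates.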
