Executing SNNs on Loihi

Hello everyone,

I am learning to execute SNNs on Loihi chips and have been able to run the tutorial “Converting a Keras model to an SNN on Loihi” on a Nahuku32 board. I have two questions.

1> When we train the converted NengoDL network with LoihiSpikingRectifiedLinear() neurons, as done below (in cell 12):

train(
    params_file="./keras_to_loihi_loihineuron_params",
    epochs=2,
    swap_activations={tf.nn.relu: nengo_loihi.neurons.LoihiSpikingRectifiedLinear()},
    scale_firing_rates=100,
)

how does the training proceed? The neurons are spiking, so does NengoDL use surrogate gradient descent (or some other form of training), or does it use a “rate” version of the LoihiSpikingRectifiedLinear() neuron to train the network? I do know that when we use nengo.LIF() neurons to train a NengoDL network, it swaps nengo.LIF() for nengo.LIFRate() during training (to approximate the dynamics of the LIF neurons).

2> I changed the network to have a MaxPool layer between two Conv layers. Architecture: Input -> Conv (spike generator) -> Conv -> MaxPool -> Conv -> Dense -> Dense. After doing the necessary configuration, when I execute my network with nengo_loihi.Simulator() with target set to “sim”, I get the following error:

----> 1 with nengo_loihi.Simulator(net_2, target="sim") as loihi_sim:
.
.
.
BuildError: Conv2D transforms not supported for off-chip to on-chip connections where `pre` is not a Neurons object.

which I believe arises because the pre object is a Node (i.e. the MaxPooling layer). I was under the assumption that the MaxPooling TensorNodes (or Nodes, in the case of NengoLoihi) would be executed on the GPU and that NengoLoihi would then take care of mapping the Conv transform op back onto the chips/board. Any ideas on how to fix this? Or is it not supported at all in NengoLoihi?

Please let me know!

You can see the details of how it works here. During training, it does not use spikes at all. On the forward pass, it uses a rate function that accounts for the hard voltage reset of the Loihi neurons (see specifically these three lines). On the backward pass, it just uses the standard ReLU activation, with the slight modification that we add tau_ref1 = 0.5 * dt to the period to account for a bias in the Loihi rates due to the hard voltage reset (see the details here).
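Schematically, the idea looks something like this (a simplified sketch of those linked lines rather than the actual NengoDL/NengoLoihi code, with the dt/amplitude bookkeeping reduced to the bare minimum):

import tensorflow as tf

def loihi_rates_sketch(J, dt=0.001, amplitude=1.0, epsilon=1e-7):
    # Smooth "backward" rates: a ReLU-style rate with 0.5 * dt added to the
    # period to correct for the bias introduced by the hard voltage reset.
    period = tf.math.reciprocal(tf.maximum(tf.nn.relu(J), epsilon))
    rates = amplitude / (period + 0.5 * dt)

    # Quantized "forward" rates: the period is rounded up to a whole number
    # of timesteps, mimicking the hard reset on Loihi.
    loihi_rates = amplitude / (dt * tf.math.ceil(period / dt))

    # Single returned tensor: loihi_rates on the forward pass, the gradient
    # of rates on the backward pass.
    return rates + tf.stop_gradient(loihi_rates - rates)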

With regard to your second problem: all input to the Loihi chip needs to be in the form of spikes. For some types of connections, we can take a scalar/vector-valued input and turn it into spikes using on/off neuron encoding; however, for connections with convolution transforms, this is often not what you want. Therefore, we essentially force you to be explicit about how you want to encode those inputs into spikes.

In your case, does your MaxPooling layer output rates or spikes? If it outputs spikes already, then in theory we should be able to take those spikes and send them straight to the board for the convolution transform to happen on the board. However, we currently don’t have a system set up for you to indicate that in any way; basically, NengoLoihi assumes that the output of a Node is not spikes. Probably the easiest way to get around this is to have your node connect to an ensemble (using a custom neuron type if necessary) whose .neurons attribute you can then connect into your convolution connection.
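Something along these lines, for example (a rough, untested sketch with made-up shapes; the right neuron type, gain, and bias will depend on what your pooling node actually outputs):

import numpy as np
import nengo
import nengo_loihi

# Illustrative shapes only: a 4x4 single-channel "pooled" feature map feeding
# a 3x3 convolution with 2 filters.
pool_shape = (4, 4, 1)
n_pool = int(np.prod(pool_shape))

with nengo.Network() as net:
    # Off-chip node standing in for the MaxPooling output (dummy values here).
    pool_out = nengo.Node(output=np.zeros(n_pool))

    # Relay ensemble with unit gain and zero bias, so the node output drives
    # the neurons directly and gets re-encoded as spikes before going on-chip.
    relay = nengo.Ensemble(
        n_neurons=n_pool,
        dimensions=1,
        neuron_type=nengo_loihi.neurons.LoihiSpikingRectifiedLinear(),
        gain=nengo.dists.Choice([1.0]),
        bias=nengo.dists.Choice([0.0]),
    )
    nengo.Connection(pool_out, relay.neurons, synapse=None)

    # The convolution connection now has a Neurons object as `pre`, so the
    # Conv2D transform can be mapped onto the chip.
    conv = nengo.Convolution(n_filters=2, input_shape=pool_shape, kernel_size=(3, 3))
    post = nengo.Ensemble(
        n_neurons=conv.output_shape.size,
        dimensions=1,
        neuron_type=nengo_loihi.neurons.LoihiSpikingRectifiedLinear(),
    )
    nengo.Connection(relay.neurons, post.neurons, transform=conv)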

Hello @Eric, thanks for pointing out the sources. What I briefly understood from the code is that the forward pass is a sort of spike-aware training, where quantized Loihi firing rates (e.g. 200Hz, 250Hz, etc.) are generated (based on the set value of self.alpha and the computed value of period) and passed through the layers to be multiplied by the weights. For an integer firing rate (i.e. 200Hz, 250Hz, etc.), I am assuming self.alpha can be set to 1 and period is an integer number of timesteps (i.e. 5 and 4, respectively). Although I see from period = tf.math.reciprocal(tf.maximum(J, self.epsilon)) that period need not necessarily be an integer.

In the backward pass, the ReLU rates corresponding to the Loihi rates are calculated by the formula 1/(period + 0.0005) $\approx$ 0.25 for period=4, and then the difference of the Loihi rates and the ReLU rates is added back to the ReLU rates (e.g. (250 - 0.25) added to 0.25 = 250), after accounting for the tf.stop_gradient over the difference of rates.

For now, it suffices for my understanding that NengoDL uses a rate approximation of the LoihiSpikingRectifiedLinear() neuron for training the model. However, if you have some extra time to spare, I would love to know further details on how the above calculation of the Loihi rates and ReLU rates is used for updating the weights. For example, you mention the following comment:

# rates + stop_gradient(loihi_rates - rates) =
#     loihi_rates on forward pass, rates on backwards

before returning the rates, but I see only one quantity returned instead of two types of rates.
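(To check my understanding of that trick, here is a tiny self-contained example with a made-up quantization, not taken from the NengoDL source: the single returned tensor takes the loihi_rates value on the forward pass, while its gradient is that of rates.)

import tensorflow as tf

J = tf.constant([237.0])  # hypothetical input current
with tf.GradientTape() as tape:
    tape.watch(J)
    rates = tf.nn.relu(J)                        # smooth "backward" rates
    loihi_rates = tf.floor(rates / 50.0) * 50.0  # stand-in for quantized rates
    out = rates + tf.stop_gradient(loihi_rates - rates)

print(out.numpy())                    # [200.] -> forward value follows loihi_rates
print(tape.gradient(out, J).numpy())  # [1.]   -> gradient follows rates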

With respect to the following:

I don’t exactly remember :sweat_smile:. The code is probably lost among my plethora of other code (and I couldn’t find it), but I do know that I had fixed this NengoDL bug for my use case such that MaxPooling was evaluated on synapsed spikes, i.e. rates, instead of simply taking a max over spikes (although both methods work fine, probably due to enough sparsity). Therefore, I am not sure if I used my fixed code or the default NengoDL code for MaxPooling, i.e. whether it output spikes or rates. But thanks for pointing this out; I will keep this in mind for future reference.

This will lead to causally related spikes (due to the MaxPooling rate output) being fed to the next convolutional layer, and not the actual spikes/rates… right?

Hello @Eric, I was following the tutorials for building and executing SNNs on Loihi: the MNIST tutorial, the CIFAR10 tutorial, and the Keras to Loihi SNN tutorial, and I had a few questions with respect to them.

1> In both the MNIST and CIFAR10 tutorials, I see that while building the convolutional Ensembles, max_rate has been set to 100 and 150 respectively, but it hasn’t been explicitly set in the Keras to Loihi SNN tutorial. The default max_rates for Ensembles is Uniform(low=200, high=400), and I believe the same is maintained in the Keras to Loihi SNN tutorial (although attempts are made to bring the firing rate to around 200Hz, and the maximum firing rate below 250Hz). Despite the high firing rates (and in light of firing rate quantization), the Keras to Loihi SNN tutorial achieves good results; why/how is that so? What are the recommended max_rates?

2> In the CIFAR10 tutorial, LoihiLIF() on-chip neurons have been used; in the MNIST tutorial, the SpikingRectifiedLinear() neuron has been used, which I believe is replaced internally with LoihiSpikingRectifiedLinear() when the network is run with the NengoLoihi Simulator. In the Keras to Loihi SNN tutorial, LoihiSpikingRectifiedLinear() is used explicitly to train/test the network. Therefore, I am confused about which type of neuron I should stick to. Is the neuron type going to affect the quantized firing rates on Loihi? It seems unlikely to me, given that LoihiSpikingRectifiedLinear() and LoihiLIF() neurons spike only once per timestep, i.e. their amplitude is always 1/dt, even when scale_firing_rates is used.

3> This question is related to the above: how come, despite using scale_firing_rates with LoihiSpikingRectifiedLinear() neurons, I always see the spike values to be 1.0 (spike values calculated as loihi_sim.data[conv_probes] x scale_firing_rates x 0.001)? I do know from the code output[:] = spikes_mask * (self.amplitude / dt) that output[:] has to be 1000.0 for any spike due to the spikes_mask (which is True/False), but when output[:] is multiplied by 0.001 and scale_firing_rates, how is it still 1.0 (shouldn’t it be the scale_firing_rates value)?

4> In the case of the MNIST and CIFAR10 tutorials, where max_rate has been explicitly set, the amplitude of the corresponding neurons has also been set to 1/max_rate. Usually, the spike amplitude parameter is left at its default of 1 (which results in the spikes’ amplitude being 1000.0), with the default max_rates set to Uniform(200, 400). So why do we modify the amplitude parameter to 1/max_rate (which would result in the spikes’ amplitude being 1000.0/max_rate) when max_rates is changed to 100Hz/150Hz? I am having trouble understanding the math later on with scale_firing_rates etc. when calculating spikes in a timestep.

5> In my experience, I have seen that by using percentile regularization on firing rates, I don’t need the scale_firing_rates parameter with the NengoDL Simulator. I see the same in the CIFAR10 tutorial, where the scale_firing_rates parameter is not used at all when porting the network to Loihi (unlike in the Keras to Loihi SNN tutorial). Is this the recommended way to manage firing rates on Loihi?

Please let me know (and if possible… along with a few extra details on the workings of the forward & backward pass with LoihiSpikingRectifiedLinear() in the context of my previous reply)!

  1. It’s really an empirical question which firing rates are best; it depends on the network/application, as well as on the specific neuron type and refractory period. The Keras to Loihi SNN tutorial uses a completely different method of setting the firing rates, since it gets the rates from Keras. This is what much of the tutorial is exploring, by using the scale_firing_rates parameter. You’ll see the actual average firing rates of the neurons printed out; for example, in this section they range from about 40 Hz to 200 Hz depending on the layer/example.

  2. You can use either neuron type.

  3. On Loihi, all spikes have the same magnitude, so no matter what you do, the value will always be 1.0. scale_firing_rates affects the firing rates, i.e. how many times a neuron spikes per second. The value of each spike remains the same (see the arithmetic sketch after this list).

  4. The reason we set the amplitude is so that the weights that we’ve trained outside of Nengo can be brought into Nengo. When you train a network in e.g. Keras, the ReLU function is f(x) = max(x, 0). If it suddenly becomes e.g. f(x) = max(150 * x, 0) in Nengo, because your neurons have a max rate of 150 Hz, then that would change what the network is computing. Setting the amplitude to 1 / 150 in this case is akin to making the activation function f(x) = max(150 * x, 0) / 150, which is equivalent to the original f(x) = max(x, 0). Of course, there are details I’m glossing over, but this is the basic intuition (also illustrated in the sketch after this list).

  5. If you train with firing rate regularization, then your trained network will learn to have activations in the correct range from the start, so you’re right, you don’t need any additional scaling. This is the method that I prefer, but it can be a bit harder to set up (you often need to choose the initial weights to have a larger magnitude; otherwise the neurons may have initial outputs that are vastly different from your target ones, and training will either be very slow or may fail completely). A rough sketch of such a regularization term is included after this list.
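To put rough numbers on points 3 and 4 (a simplified sketch; for point 3 it assumes, as described in the NengoDL converter documentation, that scale_firing_rates also scales the neuron amplitude by the inverse factor):

dt = 0.001  # simulator timestep (s)

# Point 3: with scale_firing_rates = 100, the converter sets the amplitude to
# 1 / 100, so each probed spike has value amplitude / dt = 10.0, and
# probe_value * scale_firing_rates * dt comes back to 1.0.
scale_firing_rates = 100
amplitude = 1.0 / scale_firing_rates
probed_spike_value = amplitude / dt
print(probed_spike_value * scale_firing_rates * dt)  # 1.0, for any scale

# Point 4: with max_rates = 150 and amplitude = 1 / 150, the rate-mode
# activation matches the plain ReLU the network was trained with.
x = 2.0
relu = max(x, 0.0)                       # Keras activation
firing_rate = max(150.0 * x, 0.0)        # neuron rate with a 150 Hz max rate
effective = (1.0 / 150.0) * firing_rate  # amplitude-scaled output
print(effective == relu)                 # True

And for point 5, a minimal sketch (not the exact percentile loss from the CIFAR10 tutorial) of the kind of regularization term you can attach to a neuron-activity probe when training with NengoDL; the probe names and weighting below are made up:

import tensorflow as tf

def firing_rate_reg(target_rate=150.0, weight=1e-3):
    # Keras-style loss for an activity probe: push each example's peak
    # firing rate toward target_rate Hz.
    def loss(y_true, activities):
        peak = tf.reduce_max(activities, axis=-1)
        return weight * tf.reduce_mean(tf.square(peak - target_rate))
    return loss

# e.g. sim.compile(loss={out_probe: "sparse_categorical_crossentropy",
#                        activity_probe: firing_rate_reg()}, ...)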

Thank you @Eric for looking into it. I made some progress in the meantime anyway… but the above information still helps!

With respect to the following…

Just to get it right… the max firing rate on Loihi can be 1000Hz (assuming one spike every timestep and 1000 timesteps in 1 sec), irrespective of scale_firing_rates, right? No matter what scale_firing_rates value we choose, we cannot get a neuron to spike at more than 1000Hz. Isn’t that right?

Also, I observed the same with a few of my example networks:

Thanks for seconding it.

Yes, that is correct, Loihi neurons cannot fire more than once per timestep.
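In numbers, with the default timestep:

dt = 0.001                    # default Nengo/NengoLoihi timestep (seconds)
max_possible_rate = 1.0 / dt  # one spike per timestep -> 1000 Hz ceiling
print(max_possible_rate)      # 1000.0, regardless of scale_firing_rates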


Just leaving this here (and curious to hear others’ opinions as well!).

Intel recently announced the second generation of Loihi: Loihi 2, and the following is an excerpt from here.

Other changes are very specific to spiking neural networks. The original processor's spikes, as mentioned above, only carried a single bit of information. In Loihi 2, a spike is an integer, allowing it to carry far more information and to influence how the recipient neuron sends spikes. (This is a case where Loihi 2 might be somewhat less like the neurons it's mimicking in order to perform calculations better.)

It appears to me that this improvement will alleviate the firing-rate quantization limitation of Loihi 1, since in Loihi 2 neurons can effectively convey more than a single spike’s worth of information per timestep. Right?