Executing SNNs on Loihi

Hello everyone,

I am learning to execute SNNs on Loihi chips and have been able to run the tutorial “Converting a Keras model to an SNN on Loihi” on the Nahuku32 board. I have two questions.

1> When we train the converted Nengo-DL network with LoihiSpikingRectifiedLinear() neurons as done below (in cell 12):

train(
    params_file="./keras_to_loihi_loihineuron_params",
    epochs=2,
    swap_activations={tf.nn.relu: nengo_loihi.neurons.LoihiSpikingRectifiedLinear()},
    scale_firing_rates=100,
)

how does the training proceed? The neurons are spiking, so does NengoDL use surrogate gradient descent (or some other form of training), or does it use a “rate” version of the LoihiSpikingRectifiedLinear() neuron to train the network? I do know that when we train a NengoDL network with nengo.LIF() neurons, it swaps nengo.LIF() for nengo.LIFRate() during training (to approximate the dynamics of the LIF neurons).

2> I changed the network to have a MaxPool layer between two of the Conv layers. Architecture: Input -> Conv (spike generator) -> Conv -> MaxPool -> Conv -> Dense -> Dense. After doing the necessary configuration, when I execute my network with nengo_loihi.Simulator() with the target set to “sim”, I get the following error:

----> 1 with nengo_loihi.Simulator(net_2, target="sim") as loihi_sim:
.
.
.
BuildError: Conv2D transforms not supported for off-chip to on-chip connections where `pre` is not a Neurons object.

which I believe comes from the pre object being a Node (i.e. the MaxPooling layer). I was under the assumption that the MaxPooling TensorNodes (or Nodes here, in the case of NengoLoihi) would be executed on the GPU, and that NengoLoihi would then take care of mapping the Conv transform op back onto the chips/board. Any ideas on how to fix it? Or is it not supported in NengoLoihi at all?

Please let me know!

You can see the details of how it works here. During training, it does not use spikes at all. On the forward pass, it uses a rate function that accounts for the hard voltage reset of the Loihi neurons (see specifically these three lines). On the backward pass, it just uses the standard ReLU activation, with the slight modification that we add tau_ref1 = 0.5 * dt to the period, to account for a bias in the Loihi rates due to the hard voltage reset (see the details here).
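Put roughly in code, the combination looks something like this (a simplified sketch rather than the exact NengoDL builder; the dt and epsilon values, and the unit amplitude, are assumptions):

import tensorflow as tf

dt = 0.001       # simulator timestep, assumed to be 1 ms
epsilon = 1e-15  # assumed small constant guarding against division by zero

def loihi_rate(J):
    # inter-spike period of an integrate-and-fire neuron driven by current J
    period = tf.math.reciprocal(tf.maximum(J, epsilon))
    # forward pass: the hard voltage reset quantizes the period to whole timesteps
    loihi_rates = tf.where(J > 0, 1.0 / (dt * tf.math.ceil(period / dt)), 0.0)
    # backward pass: smooth ReLU-like rates, with 0.5 * dt added to the period
    rates = tf.where(J > 0, 1.0 / (period + 0.5 * dt), 0.0)
    # value of loihi_rates on the forward pass, gradient of rates on the backward
    return rates + tf.stop_gradient(loihi_rates - rates)

print(loihi_rate(tf.constant([100.0, 230.0, 250.0])))
# e.g. J = 250 -> period of 4 timesteps -> 250 Hz; J = 230 -> 5 timesteps -> 200 Hz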

With regards to your second problem: all input to the Loihi chip needs to be in spikes. For some types of connections, we can take a scalar/vector-valued input and turn it into spikes using on/off neuron encoding; however, for connections with convolution transforms, this is often not what you want. Therefore, we essentially force you to be explicit about how you want to encode those inputs into spikes.

In your case, does your MaxPooling layer output rates or spikes? If it outputs spikes already, then in theory we should be able to take those spikes and send them straight to the board for the convolution transform to happen there. However, we currently don’t have a system set up for you to indicate that in any way; basically, NengoLoihi assumes that the output of a Node is not spikes. Probably the easiest way to get around this is to have your node connect to an ensemble (using a custom neuron type if necessary) whose .neurons attribute you can then connect into your convolution connection.
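The pattern would look roughly like this (a minimal sketch, assuming a 16x16 single-channel MaxPooling output; the names maxpool_node, relay, and post, and all layer sizes, are hypothetical):

import nengo

with nengo.Network() as net:
    # stands in for the MaxPooling TensorNode/Node output
    maxpool_node = nengo.Node(size_in=16 * 16)

    # relay ensemble: one spiking neuron per dimension, unit gain, zero bias
    relay = nengo.Ensemble(
        n_neurons=16 * 16,
        dimensions=1,
        neuron_type=nengo.SpikingRectifiedLinear(),
        gain=nengo.dists.Choice([1.0]),
        bias=nengo.dists.Choice([0.0]),
    )
    # inject the node output directly as input current to the relay neurons
    nengo.Connection(maxpool_node, relay.neurons, synapse=None)

    conv = nengo.Convolution(n_filters=8, input_shape=(16, 16, 1), kernel_size=(3, 3))
    post = nengo.Ensemble(
        n_neurons=conv.output_shape.size,
        dimensions=1,
        neuron_type=nengo.SpikingRectifiedLinear(),
    )
    # pre is now a Neurons object, so the Conv2D transform can be mapped on-chip
    nengo.Connection(relay.neurons, post.neurons, transform=conv, synapse=0.005)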

Hello @Eric, thanks for pointing out the sources. What I briefly understood from the code is that the forward pass is a sort of spike-aware training, where quantized Loihi firing rates (e.g. 200 Hz, 250 Hz, etc.) are generated (based on the set value of self.alpha and the obtained value of period) and passed through the layers to be multiplied by the weights. For an integer firing rate (i.e. 200 Hz, 250 Hz, etc.), I am assuming self.alpha can be set to 1 and period is an integer number of timesteps (i.e. 5 and 4, respectively). Although, I see that period = tf.math.reciprocal(tf.maximum(J, self.epsilon)), i.e. period need not necessarily be an integer.

In the backward pass, the ReLU rates corresponding to the Loihi rates are calculated by the formula 1/(period + 0.0005), which for period = 4 timesteps (i.e. 0.004 s) gives $\approx$ 222.2 Hz. The function then returns the difference of the Loihi rates and the ReLU rates added back onto the ReLU rates (e.g. (250 - 222.2) added to 222.2 = 250), although with tf.stop_gradient applied over the difference of rates.
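For concreteness, checking the numbers (assuming dt = 0.001 s and an input current J = 250):

import math

dt = 0.001
J = 250.0
period = 1.0 / J                                  # 0.004 s, i.e. 4 timesteps
loihi_rate = 1.0 / (dt * math.ceil(period / dt))  # 250.0 Hz on the forward pass
relu_rate = 1.0 / (period + 0.5 * dt)             # ~222.2 Hz on the backward pass
forward = relu_rate + (loihi_rate - relu_rate)    # 250.0, i.e. the Loihi rate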

For now, it is enough for my understanding that NengoDL uses a rate approximation of the LoihiSpikingRectifiedLinear() neuron for training the model; however, if you have some extra time to spare, I would love to know further details on how the above calculation of Loihi rates and ReLU rates is used for updating the weights. For example, you mention the following comment:

# rates + stop_gradient(loihi_rates - rates) =
#     loihi_rates on forward pass, rates on backwards

before returning the rates, but I see only one quantity returned instead of two types of rates.
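As a sanity check, I tried the following toy snippet (my own, not NengoDL code), which suggests that the single returned tensor has the value of the Loihi rates but the gradient of the ReLU rates:

import tensorflow as tf

J = tf.Variable([250.0])
with tf.GradientTape() as tape:
    rates = 1.0 / (1.0 / J + 0.0005)    # smooth ReLU-like rates, ~222.2 Hz
    loihi_rates = tf.constant([250.0])  # quantized forward-pass rates
    y = rates + tf.stop_gradient(loihi_rates - rates)

print(y.numpy())            # [250.] -> loihi_rates on the forward pass
print(tape.gradient(y, J))  # gradient of `rates` only, since the stop_gradient
                            # term contributes no gradient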

With respect to your question about whether my MaxPooling layer outputs rates or spikes:

I don’t exactly remember :sweat_smile: . The code is probably lost in the plethora of my other code (and I couldn’t find it), but I do know that I had fixed this NengoDL bug for my use case, such that MaxPooling was evaluated on synapsed spikes (i.e. rates) instead of simply taking a max over raw spikes (although both methods work fine, probably due to enough sparsity). Therefore, I am not sure whether I used my fixed code or the default NengoDL code for MaxPooling, i.e. whether it output spikes or rates. But thanks for pointing this out; I will keep it in mind for future reference.
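For reference, the distinction I mean is roughly the following (a minimal NumPy sketch of my own, not the lost code; the tau value is an assumed lowpass synapse time constant):

import numpy as np

dt = 0.001
tau = 0.005  # assumed lowpass synapse time constant
decay = np.exp(-dt / tau)

rng = np.random.default_rng(0)
# four spike trains at ~200 Hz; spikes have amplitude 1/dt, as in Nengo
spikes = (rng.random((1000, 4)) < 200 * dt) / dt

filtered = np.zeros_like(spikes)
for t in range(1, len(spikes)):
    # discretized lowpass filter: y[t] = decay * y[t-1] + (1 - decay) * x[t]
    filtered[t] = decay * filtered[t - 1] + (1 - decay) * spikes[t]

max_over_spikes = spikes.max(axis=1)   # mostly 0 or 1/dt: max over raw spikes
max_over_rates = filtered.max(axis=1)  # smooth estimate of the max firing rate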

This will lead to causally related spikes (due to the MaxPooling rate output) being fed to the next Convolutional layer, and not the actual spikes/rates… right?