Hi,

I’m reading your paper “Training Spiking Deep Networks for Neuromorphic Hardware”. It suggests first training the network as an ANN, then transferring the parameters to spiking neurons and continuing training as an SNN. Does Nengo currently use the methods mentioned in the paper to train SNNs? Is it necessary to first train with an ANN? Can I train an SNN directly with Nengo, and does the performance differ significantly from converting and then training? I haven’t found this information in the Nengo documentation. Thank you

Hello @victkid and welcome!

These features are supported by Nengo-DL. You can find an example here for training an SNN via backpropagation, which references this same paper.

The example trains with an ANN and then converts it into an SNN for inference. This conversion is done automatically for you, and the example provides more details. For a feed-forward network such as this one, the difference in performance (between ANN and SNN) depends on how many neurons there are, how frequently they spike, and the time-constants of the synaptic filters. In particular, a faster time-constant produces a lower-latency result with higher variability from spike noise, while a slower time-constant gives higher latency but less variance. The expected values of the weighted and filtered spike trains remain the same between the ANN and the SNN (in a statistical sense, due to the pooling across all of the neurons and the averaging across time). These trade-offs receive some mathematical treatment in Voelker (2019).
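The latency-versus-variance trade-off can be illustrated with a plain-numpy sketch (illustrative only, not NengoDL's implementation; the function name and constants here are my own): Poisson spikes at a fixed rate are passed through an exponential lowpass synapse. Both a fast and a slow time-constant recover the same average (the ANN's rate), but the faster synapse has much more spike-noise variance.

```python
import numpy as np

def filtered_spikes(rate, tau, dt=0.001, t_total=10.0, seed=0):
    """Poisson spike train at `rate` Hz filtered by a lowpass synapse."""
    rng = np.random.default_rng(seed)
    n_steps = int(t_total / dt)
    # Spikes of magnitude 1/dt so that each spike integrates to area 1
    spikes = (rng.random(n_steps) < rate * dt) / dt
    decay = np.exp(-dt / tau)  # exponential (lowpass) synaptic filter
    out = np.empty(n_steps)
    acc = 0.0
    for i, s in enumerate(spikes):
        acc = decay * acc + (1.0 - decay) * s
        out[i] = acc
    return out

fast = filtered_spikes(rate=100.0, tau=0.005)  # 5 ms synapse
slow = filtered_spikes(rate=100.0, tau=0.1)    # 100 ms synapse

# Both hover around the underlying 100 Hz rate on average,
# but the fast synapse's output is far noisier.
print(fast[1000:].mean(), slow[1000:].mean())
print(fast[1000:].std(), slow[1000:].std())
```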

There are also ways you can use Nengo-DL to train directly on spikes in the forward pass, with a custom gradient for the backward pass of the backpropagation algorithm. This is especially beneficial when training spiking RNNs, since the variance in the spike trains tends to propagate through the recurrent connections. We aim to publish some examples of this kind of training soon. In this context, the dynamics of the synaptic filter also end up playing a useful role that can be exploited computationally (rather than, as in the feed-forward case, trading increased latency for decreased variability), as in the Nengo examples for building dynamical systems.
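To give a sense of the "spiking forward pass, smooth backward pass" idea, here is a hedged numpy-only toy (my own illustration, not NengoDL code): the forward pass uses a hard spike threshold, while the backward pass substitutes a sigmoid-shaped surrogate derivative so gradient descent can still assign credit through the non-differentiable spike function.

```python
import numpy as np

def surrogate_grad_step(w, x, target, lr=0.5):
    """One gradient step on a single weight, using a surrogate spike gradient."""
    pre = w * x - 1.0                   # drive relative to spike threshold
    spikes = (pre > 0).astype(float)    # hard threshold in the forward pass
    err = spikes.mean() - target        # mean firing vs. desired rate
    # Backward pass: replace d(spike)/d(pre), which is zero almost everywhere,
    # with the derivative of a steep sigmoid centered on the threshold
    sg = 1.0 / (1.0 + np.exp(-4.0 * pre))
    d_spike = 4.0 * sg * (1.0 - sg)
    grad_w = 2.0 * err * (d_spike * x).mean()
    return w - lr * grad_w, err ** 2

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 2.0, size=1000)
w, target = 0.4, 0.8                    # start off firing too rarely
losses = []
for _ in range(200):
    w, loss = surrogate_grad_step(w, x, target)
    losses.append(loss)
print(losses[0], losses[-1])            # loss shrinks as w adapts
```

Note that at the start every input is below threshold, so the true gradient through the spike function is exactly zero; only the surrogate lets the weight move at all.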

Hello @arvoelke,

I was reading the question and the answer above, since it is exactly what I was looking for, but I am still confused about one thing. The modifications proposed in the paper (removing local contrast normalization and max pooling, and applying noise), to be implemented prior to ANN training, are not implemented by the converter but by the user, right?

Thank you

Hi @ntouev,

From my knowledge of the ANN-to-SNN converter in NengoDL, yes, you are correct. The techniques proposed in the paper have to be implemented by the user as part of the network’s architecture prior to the ANN training process. All the converter does is translate the weights and biases in the ANN to the appropriate weights and biases in the SNN.
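For concreteness, here is a plain-numpy sketch of two of those user-side modifications (the helper names are my own, not NengoDL API): replacing max pooling with average pooling (a max is hard to realize with spiking neurons, while an average is just a weighted sum), and injecting Gaussian noise into the activations during ANN training so the weights become robust to spike-train variability after conversion.

```python
import numpy as np

def avg_pool2d(x, k=2):
    """Non-overlapping k x k average pooling over a (H, W) feature map."""
    h, w = x.shape
    return x[:h - h % k, :w - w % k].reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def noisy_relu(x, noise_std=0.1, rng=None):
    """ReLU with additive Gaussian noise, applied during ANN training only."""
    rng = rng or np.random.default_rng()
    return np.maximum(0.0, x) + rng.normal(0.0, noise_std, size=x.shape)

fmap = np.arange(16, dtype=float).reshape(4, 4)
pooled = avg_pool2d(fmap)   # 2x2 output; each entry is a 2x2 block mean
print(pooled)
```

At inference time the noise is simply turned off (`noise_std=0`), leaving a standard ReLU whose weights have already adapted to noisy inputs.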