How are the weights computed for the SNN?

For my work, I have been using NengoDL's converter to convert an ANN model into an SNN. During this process I always save the parameters of the ANN to be used for the SNN implementation. However, I'm curious how these parameters are actually being used in the SNN.
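For reference, here is roughly the workflow I've been using (the model architecture and file path here are just placeholders):

```python
import nengo
import nengo_dl
import tensorflow as tf

# build the ANN as an ordinary Keras model (placeholder architecture)
inp = tf.keras.Input(shape=(784,))
hidden = tf.keras.layers.Dense(128, activation=tf.nn.relu)(inp)
out = tf.keras.layers.Dense(10)(hidden)
model = tf.keras.Model(inputs=inp, outputs=out)

# convert to a Nengo network; with rate neurons this matches the ANN
converter = nengo_dl.Converter(model)

with nengo_dl.Simulator(converter.net) as sim:
    # ... sim.compile(...) and sim.fit(...) happen here ...
    sim.save_params("./ann_params")  # save the trained ANN parameters

# re-convert with spiking neurons and load the same saved parameters
snn_converter = nengo_dl.Converter(
    model, swap_activations={tf.nn.relu: nengo.SpikingRectifiedLinear()}
)
with nengo_dl.Simulator(snn_converter.net) as sim:
    sim.load_params("./ann_params")
    # ... sim.predict(...) to run the SNN ...
```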

When it comes to converting ANNs to SNNs, a few different methods have been introduced to figure out the correct weights and biases to use in the SNN, for instance ANN Max (Diehl et al., 2015) and ANN Percentile (Rueckauer et al., 2017). Both of these approaches normalize and quantize the weights to make the conversion from ANN to SNN as lossless as possible.
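To illustrate what I mean, here is my rough understanding of the layer-wise normalization scheme from those papers, as an illustrative numpy sketch (not code from either paper):

```python
import numpy as np

def normalize_weights(weights, activations, percentile=100.0):
    """Layer-wise weight normalization for ANN-to-SNN conversion.

    percentile=100.0 gives max-based normalization (Diehl et al., 2015);
    a lower value gives percentile-based normalization (Rueckauer et al.,
    2017). `weights` is a list of per-layer weight matrices, and
    `activations` is a list of activation samples recorded from the ANN
    on training data, one array per layer.
    """
    scaled = []
    prev_factor = 1.0
    for W, acts in zip(weights, activations):
        # normalization factor = (percentile of) the layer's max activation
        factor = np.percentile(acts, percentile)
        # rescale so activations stay in the range the spiking neurons can
        # represent while preserving the overall network function
        # (biases, if present, would be divided by the same factor)
        scaled.append(W * prev_factor / factor)
        prev_factor = factor
    return scaled
```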

So, to get to my question: is there a particular scheme that Nengo uses to convert the weights from an ANN to an SNN, or is something completely different going on that I'm not aware of? I've tried looking into this but haven't found much, so any insight into how the weights are computed/normalized within Nengo would be very helpful. Thanks!

Hi @mbaltes,

If you are using NengoDL's converter to convert an ANN into an SNN, then there is really nothing that Nengo does to the parameters of the ANN to make it work in the SNN. The network parameters (weights, neuron biases, neuron gains, etc.) are trained as part of the TensorFlow training process (i.e., when you call sim.fit), and the converter doesn't do anything to change these values.

Really, the only thing that the converter changes is a scaling factor that is applied to the input and output weights of the neuron ensembles. This is the scale_firing_rates parameter, and it is discussed in this NengoDL example. This parameter is used to boost the firing rates of the neurons in the network in the event that the trained weights do not cause the neurons to spike fast enough to permit the flow of information through the network. While rate neurons allow information to flow through the network at every timestep (since the output of each neuron is a rate value), spiking networks only transmit information when neurons spike. Note that an SNN essentially becomes a rate-based network when the neurons are allowed to spike infinitely fast.
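For example (assuming `model` is your trained Keras model; the scaling value here is just illustrative):

```python
import nengo
import nengo_dl
import tensorflow as tf

# assuming `model` is your trained Keras model
converter = nengo_dl.Converter(
    model,
    swap_activations={tf.nn.relu: nengo.SpikingRectifiedLinear()},
    # multiply the neuron input weights (gains) by 100 and the output
    # weights by 1/100; firing rates go up, the computation is unchanged
    scale_firing_rates=100,
)
```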

Another thing that the converter allows you to do is change the synaptic filter on the output of each neuron. This isn't changing the network parameters per se; rather, it just smooths out the spiking output to get it to "approximate" a rate value.
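Concretely, extending the snippet above, this is the converter's synapse argument (the 10 ms time constant here is just an illustrative choice):

```python
converter = nengo_dl.Converter(
    model,
    swap_activations={tf.nn.relu: nengo.SpikingRectifiedLinear()},
    scale_firing_rates=100,
    # lowpass filter (tau = 10 ms) applied to every neuron's output; this
    # smooths the spike trains so they better approximate a rate value
    synapse=0.01,
)
```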

So, to answer your question, the NengoDL converter doesn't change any parameters of the ANN to make it into an SNN. However, depending on the performance of the network (see the example), it may be necessary to scale the input and output weights (the converter will do this automatically with the scale_firing_rates parameter) to get the performance of your network to better approximate that of the ANN.
