How are the weights computed for the SNN?

Hi @mbaltes,

If you are using the NengoDL’s converter to convert an ANN into an SNN, then there is really nothing that Nengo does to parameters of the ANN to make it work in the SNN. The network parameters (weights, neuron biases, neuron gains, etc.) are trained as part of the TensorFlow training process (i.e., when you call sim.fit), and the converter doesn’t do anything to change these values.

Really, the only thing that the converter changes is a scaling factor that is applied to the input and output weights of the neuron ensembles. This is the scale_firing_rates parameter, and it is discussed in this NengoDL example. This parameter is used to boost the firing rates of the neurons in the network in the event that the trained weights do not cause the neurons to spike fast enough to permit the flow of information through the network. This is because, while rate neurons allow information to flow through the network at every timestep (since the output of each neuron is a rate value), spiking networks only transmit information when neurons spike. Note that an SNN essentially becomes a rate-based network when the neurons are allowed to spike infinitely fast.
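To see why this scaling doesn't change what the network computes, here is a hypothetical pure-Python sketch (not the NengoDL API) of the idea: the weights going into a neuron are multiplied by a scale factor, and the weights coming out are divided by the same factor. For a ReLU-like rate neuron, the layer's output is unchanged, but the neuron's firing rate is boosted, so a spiking version emits far more spikes per timestep:

```python
def relu(x):
    return max(0.0, x)

def layer_output(x, w_in, w_out, scale=1.0):
    """One toy neuron: input weight w_in, output weight w_out.

    `scale` plays the role of scale_firing_rates: it boosts the
    firing rate while the output weight is divided to compensate.
    """
    rate = relu(w_in * scale * x)        # firing rate, boosted by `scale`
    out = (w_out / scale) * rate         # output weight divided by `scale`
    return rate, out

x = 0.5
rate1, out1 = layer_output(x, w_in=2.0, w_out=3.0, scale=1.0)
rate100, out100 = layer_output(x, w_in=2.0, w_out=3.0, scale=100.0)

print(rate1, rate100)  # 1.0 vs 100.0 -- the neuron fires 100x faster
print(out1, out100)    # 3.0 vs 3.0  -- the layer output is unchanged
```

In the spiking case, the higher firing rate means more spikes per timestep, so downstream neurons get a denser (less noisy) signal, which is exactly why scale_firing_rates improves SNN accuracy.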

Another thing that the converter allows you to do is to change the synaptic filter on the output of each neuron. This isn't changing the network parameters per se; rather, it just smooths out the spiking output to get it to "approximate" a rate value.

So, to answer your question, the NengoDL converter doesn't change any parameters of the ANN to make it into an SNN. However, depending on the performance of the network (see the example), it may be necessary to scale the input and output weights (the converter will do this automatically with the scale_firing_rates parameter) to get your network's performance to be a better approximation of the ANN.