How are spikes generated within NengoDL?

Hello, I’ve been working with NengoDL for a little bit now and I’m just trying to wrap my head around a few things.

Something I keep wondering about is how spikes are generated within NengoDL. I understand that the input is flattened and then fed into a spiking ensemble as a continuous input to generate a spiking output, but how exactly is this spiking output determined?

In other implementations of SNNs that I’ve seen, the input is encoded using some technique like rate coding, time-to-first-spike, etc., and the spikes are then determined by that encoding. Do the spiking ensembles in Nengo work in a similar way, or is there something different going on under the hood that I’m not aware of? Overall, I’m curious about where exactly these spikes are coming from. If someone could help me out and clear this up for me it would be greatly appreciated, thank you.

Hi @mbaltes, and welcome back to the Nengo forums! :smiley:

There is a lot of nuance to the question you are asking, so I’ll try and hit some of the basic points in my response, but feel free to ask further questions for clarification.

At the very basic level, in NengoDL (and in pretty much all of the Nengo software), spiking neurons operate similarly to biological neurons. That is to say, input to a neuron (be it a signal generated by an input node, or a spike train filtered by a synapse) is treated as an input current. This input current is then put through the neuron’s activation function, which updates a model of the neuron’s internal state (i.e., membrane voltage, etc.), and that state determines whether or not the neuron produces a spike. The rate at which a neuron spikes, and how that spike rate relates to the input current, is determined by the neuron’s activation function (i.e., the type of neuron being used). Some neuron types, like the LIF neuron, have a non-linear activation function, while others, like the ReLU neuron, have a linear one.
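To make that concrete, here is a deliberately simplified sketch of LIF-style dynamics in plain Python. It is not NengoDL’s actual implementation (which, among other things, handles sub-timestep spike timing and per-neuron gains and biases), but it shows the basic loop: integrate the input current into a membrane voltage, spike on a threshold crossing, then reset and stay silent through a refractory period:

```python
import numpy as np

dt = 0.001       # simulation timestep (s)
tau_rc = 0.02    # membrane time constant (s)
tau_ref = 0.002  # refractory period (s)
J = 1.5          # constant input current (spike threshold is 1.0)

v, refractory = 0.0, 0.0
spike_times = []
for step in range(1000):  # 1 s of simulated time
    if refractory > 0:
        refractory -= dt              # neuron is silent while refractory
    else:
        v += (dt / tau_rc) * (J - v)  # leaky integration of the input current
    if v >= 1.0:                      # threshold crossing -> emit a spike
        spike_times.append(step * dt)
        v = 0.0                       # reset the membrane voltage
        refractory = tau_ref
print(f"{len(spike_times)} spikes in 1 s of simulated time")
```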

Understanding how information is translated into spikes, though, is more complex. The typical process for constructing a model in NengoDL is to build the model (in TF, or directly in NengoDL) and then train it on some data using sim.fit. During the training process, the weights in the model are adjusted such that some desired input-to-output mapping is achieved. In NengoDL, this training step uses the rate-mode versions of the neurons, which avoids the additional noise (and non-differentiability) that spiking neurons would introduce into the training.
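As a rough illustration, a minimal NengoDL training loop might look like the following. The model, data, and hyperparameters here are all hypothetical (a toy identity-function task); the point is just that sim.fit optimizes the weights while the LIF neurons are internally treated in rate mode:

```python
import numpy as np
import nengo
import nengo_dl

with nengo.Network() as net:
    inp = nengo.Node(np.zeros(1))
    ens = nengo.Ensemble(50, 1, neuron_type=nengo.LIF())
    nengo.Connection(inp, ens)
    out_p = nengo.Probe(ens)

# hypothetical training data: learn to pass a scalar through the ensemble
x = np.random.uniform(-1, 1, size=(320, 1, 1))  # (batch, timesteps, dims)
y = x.copy()

with nengo_dl.Simulator(net, minibatch_size=32) as sim:
    # during fit, NengoDL substitutes a differentiable rate
    # approximation for the spiking LIF neurons
    sim.compile(optimizer="adam", loss="mse")
    sim.fit({inp: x}, {out_p: y}, epochs=5)
```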

During the running phase (either in sim.predict or sim.run), the rate neurons are swapped out for their spiking versions, and the process I described above (about how biological neurons operate) determines whether or not each neuron spikes. The thought here is that, given some fixed input, the average spike rate of a spiking neuron should closely approximate the output of its rate-mode equivalent, assuming everything else (input and output weights, gains and biases, etc.) is kept constant. This is why we can do a direct swap between the training and running phases.
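You can see this approximation at work with plain Nengo: drive a spiking LIF ensemble with a constant input and low-pass filter the decoded spike trains, and the filtered output should settle near the value the rate-mode ensemble would decode. A small sketch (the input value and filter time constant are arbitrary choices):

```python
import nengo

with nengo.Network() as net:
    stim = nengo.Node(0.5)  # constant input
    ens = nengo.Ensemble(100, 1, neuron_type=nengo.LIF())
    nengo.Connection(stim, ens)
    # low-pass filter the decoded spiking output
    p = nengo.Probe(ens, synapse=0.05)

with nengo.Simulator(net) as sim:
    sim.run(1.0)

# after the filter settles, the decoded output should hover around 0.5,
# the value the rate-mode version of the ensemble would produce
print(sim.data[p][-100:].mean())
```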

In practice, however, if the firing rates of the neurons are too low, there may not be enough spikes produced within a given simulation time window to propagate the necessary information through the entire network and produce a valid output (when a neuron is not spiking, no information is being transferred). For this reason, during training, we tend to regularize the firing rates to some value (typically in the hundreds of Hz) that the spiking network can work with.

If you are converting a TF network to a NengoDL network, there is the scale_firing_rates parameter in the NengoDL converter that “boosts” the input current to each neuron (thereby making the effective firing rate higher), with a corresponding inverse multiple on the output weights to account for the increased firing rates. The increase in firing rate ensures more information flow, while the inverse gain on the output weights corrects for the increased activity that was not present in the training model (basically, $scale \times \frac{1}{scale} = 1$).
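For example, converting a (hypothetical, made-up) Keras model while swapping the rate-based ReLUs for spiking neurons and boosting the firing rates might look like this:

```python
import tensorflow as tf
import nengo
import nengo_dl

# a hypothetical Keras model to convert; the layer sizes are arbitrary
inp = tf.keras.Input(shape=(4,))
hidden = tf.keras.layers.Dense(10, activation=tf.nn.relu)(inp)
out = tf.keras.layers.Dense(2)(hidden)
model = tf.keras.Model(inputs=inp, outputs=out)

converter = nengo_dl.Converter(
    model,
    # swap the rate-based ReLUs for their spiking equivalents
    swap_activations={tf.nn.relu: nengo.SpikingRectifiedLinear()},
    # multiply input currents by 100, and output weights by 1/100
    scale_firing_rates=100,
)
```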

In Nengo and NengoDL, you can also construct models that do not have to be trained at all. Instead, the network weights are determined by the function you specify on the connections and by the NEF algorithm. The NEF algorithm is a lot to describe, so I won’t go through it here, but we do have a series of YouTube videos explaining it, as well as a description on the Nengo documentation page.
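For completeness, here is what such an untrained, NEF-solved model looks like in Nengo; the function passed to the connection is what the solver uses to compute the connection weights (the sine input and the squaring function are just arbitrary examples):

```python
import numpy as np
import nengo

with nengo.Network() as net:
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))
    a = nengo.Ensemble(100, 1)
    b = nengo.Ensemble(100, 1)
    nengo.Connection(stim, a)
    # no training: the NEF solves (via least squares over the neurons'
    # tuning curves) for decoders that approximate x**2 on this connection
    nengo.Connection(a, b, function=np.square)
    p = nengo.Probe(b, synapse=0.01)

with nengo.Simulator(net) as sim:
    sim.run(1.0)
```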

I hope this answers your questions!