Hi @mbaltes, and welcome back to the Nengo forums!

There is a lot of nuance to the question you are asking, so I’ll try and hit some of the basic points in my response, but feel free to ask further questions for clarification.

At the very basic level, in NengoDL (and in pretty much all of the Nengo software), spiking neurons operate similarly to biological neurons. That is to say, input to the neuron (be it some signal generated by some input node, or a spike train filtered by some synapse) is treated as input current to the neuron. This input current is then put through the neuron’s activation function which updates a model of the neuron’s internal state (i.e., membrane voltages, etc.) that then determines if the neuron produces a spike or not. The rate at which a neuron spikes, and the correlation of the spike rate to the input current is determined by the neuron’s activation function (i.e., the type of neuron being used). Some neuron types, like the LIF neuron, have a non-linear activation function, while other neuron types, like the ReLU neuron, have a linear activation function.
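To make that concrete, here is a minimal sketch of that current-to-spike process for an LIF neuron, written in plain NumPy rather than Nengo's actual implementation (the parameter values and function names here are just illustrative):

```python
import numpy as np

def lif_step(J, v, refractory, dt=0.001, tau_rc=0.02, tau_ref=0.002):
    """One Euler step of a leaky integrate-and-fire neuron.

    J: input current, v: membrane voltage, refractory: time left in
    the refractory period. Returns (spiked, v, refractory).
    """
    # Integrate the input current only while not refractory
    v = np.where(refractory > 0, v, v + dt * (J - v) / tau_rc)
    refractory = np.maximum(refractory - dt, 0)
    spiked = v >= 1.0                      # threshold crossing -> spike
    v = np.where(spiked, 0.0, v)           # reset voltage after a spike
    refractory = np.where(spiked, tau_ref, refractory)
    return spiked, v, refractory

# A constant input current above threshold produces a regular spike train
v, ref = np.zeros(1), np.zeros(1)
spikes = 0
for _ in range(1000):                      # 1 second at dt = 1 ms
    spiked, v, ref = lif_step(2.0, v, ref)
    spikes += int(spiked[0])
print(spikes)                              # roughly 60 spikes for J = 2
```

The nonlinearity of the LIF response comes out of this state update: doubling `J` does not double the spike count, whereas for a ReLU neuron the rate is simply proportional to the (rectified) current.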

Understanding how the information is translated into spikes, though, is more complex. The typical process for constructing a model in NengoDL is to build the model (in TF, or directly in NengoDL), and then train the model with some data using `sim.fit`. During the training process, the weights in the model are adjusted such that some desired input-to-output mapping is achieved. In NengoDL, this training step uses the rate-mode versions of the neurons to exclude the additional noise and difficulty that spiking neurons would introduce into the training.

During the running phase (either in `sim.predict` or `sim.run`), the rate neurons are swapped out for their spiking versions, and the process I described above (about how biological neurons operate) is used to determine whether or not each neuron spikes. The thought here is that if provided with some fixed input, the average spike rate of a spiking neuron should closely approximate the rate-mode equivalent of said neuron, assuming everything else (input & output weights, gains and biases, etc.) is kept constant. This is why we can do the direct swap between the training and running phases.
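You can check this rate/spike equivalence numerically. The sketch below (hypothetical NumPy code, not NengoDL's internals) compares the average firing rate of an Euler-integrated spiking LIF neuron over a 2-second window against the closed-form LIF rate curve for the same constant input current:

```python
import numpy as np

def lif_rate(J, tau_rc=0.02, tau_ref=0.002):
    """Closed-form steady-state LIF firing rate for constant current J."""
    if J <= 1.0:
        return 0.0
    return 1.0 / (tau_ref + tau_rc * np.log1p(1.0 / (J - 1.0)))

def lif_spike_count(J, T=2.0, dt=0.001, tau_rc=0.02, tau_ref=0.002):
    """Count spikes of an Euler-integrated spiking LIF over T seconds."""
    v, refractory, count = 0.0, 0.0, 0
    for _ in range(int(T / dt)):
        if refractory > 0:
            refractory -= dt               # sit out the refractory period
        else:
            v += dt * (J - v) / tau_rc     # integrate the input current
        if v >= 1.0:                       # threshold crossing -> spike
            count += 1
            v, refractory = 0.0, tau_ref
    return count

J = 1.5
spiking_rate = lif_spike_count(J) / 2.0    # average rate over the window
print(spiking_rate, lif_rate(J))           # the two should be close
```

The agreement improves as the averaging window grows (and with time-varying inputs, the synaptic filter plays the same smoothing role), which is what justifies training in rate mode and running in spiking mode.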

In practice, however, if the firing rates of the neurons are too low, there may not be enough spikes produced during some simulation time window to propagate the necessary information through the entire network and produce a valid output (when a neuron is not spiking, no information is being transferred). For this reason, during training, we tend to regularize the firing rates to some value (typically in the 100s of Hz) that the spiking network can work with. If you are converting a TF network to a NengoDL network, there is the `scale_firing_rates` parameter in the NengoDL converter that "boosts" the input current to each neuron (thereby making the effective firing rate higher), with a corresponding inverse multiple on the output weights to account for the increased firing rates. The increase in firing rate ensures more information flow, while the inverse gain on the output weights corrects for this increased activity that was not present in the training model (basically, $scale \times \frac{1}{scale} = 1$).
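For a linear (ReLU) neuron this bookkeeping is exact, which a short hypothetical NumPy sketch can demonstrate: scaling the input currents by some factor and the output weights by its inverse leaves the decoded output unchanged (for nonlinear neuron types like the LIF, the correction is only approximate):

```python
import numpy as np

def relu(J):
    """Linear rate neuron: firing rate is the rectified input current."""
    return np.maximum(J, 0.0)

rng = np.random.default_rng(0)
x = rng.normal(size=5)                  # input vector
W_in = rng.normal(size=(20, 5))         # input weights -> neuron currents
W_out = rng.normal(size=(2, 20))        # output (decoding) weights
scale = 100.0                           # plays the role of scale_firing_rates

baseline = W_out @ relu(W_in @ x)
# Boost the input current, divide the output weights by the same factor
scaled = (W_out / scale) @ relu(scale * (W_in @ x))

print(np.allclose(baseline, scaled))    # True: scale * (1/scale) = 1
```

In the scaled network, each neuron fires 100x as often, so in a short simulation window many more spikes carry the signal forward, while the downscaled output weights keep the decoded values in the range the training saw.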

In Nengo, and NengoDL, you can also construct models that do not have to be trained. Instead, the network weights are determined by the function you specify on the connections, and the NEF algorithm. The NEF algorithm is a lot to describe, so I won’t go through it here, but we do have a series of YouTube videos explaining it, as well as a description on the Nengo documentation page.
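To give a flavour of the NEF approach without going through the full derivation: the output weights (decoders) can be found with a regularized least-squares solve against the neurons' rate-mode tuning curves. Below is a minimal, hypothetical NumPy sketch (ReLU rate neurons with random gains and biases, decoding the identity function), not Nengo's actual solver:

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons = 50
encoders = rng.choice([-1.0, 1.0], size=n_neurons)   # 1-D encoders
gains = rng.uniform(0.5, 2.0, size=n_neurons)
biases = rng.uniform(-1.0, 1.0, size=n_neurons)

def rates(x):
    """Rate-mode ReLU tuning curves for a vector of scalar inputs x."""
    return np.maximum(gains * (encoders * x[:, None]) + biases, 0.0)

# Solve for decoders d minimizing ||A d - f(x)||^2 with L2 regularization
eval_points = np.linspace(-1, 1, 200)
A = rates(eval_points)                               # activity matrix
target = eval_points                                 # f(x) = x (identity)
sigma = 0.01 * A.max()
decoders = np.linalg.solve(
    A.T @ A + sigma**2 * len(eval_points) * np.eye(n_neurons),
    A.T @ target,
)

rmse = np.sqrt(np.mean((A @ decoders - target) ** 2))
print(rmse)                                          # small decoding error
```

Swapping `target` for some other function of `eval_points` is how a function specified on a Nengo connection turns into a fixed set of weights with no gradient-based training involved.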

I hope this answers your questions!