There is no “ideal” spiking firing rate that applies to all NengoDL models, and finding the scale_firing_rates value that works for your model involves a fair amount of trial and error.
You will find that, generally, as you increase the value of scale_firing_rates, the performance of the network increases. However, you will reach a point where further increases in scale_firing_rates yield minimal improvement in model accuracy. At that point, you may consider the scale_firing_rates value to be “good enough”, but it’s really up to the designer of the model to decide where that point is. In particular, if you are implementing the model on actual spiking hardware, you’ll want to consider whether the increase in performance is worth the increase in power draw caused by the extra spiking happening in the network.
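To build intuition for why accuracy improves and then plateaus, here is a toy sketch in plain Python (this is not NengoDL’s actual implementation, and the function name and constants are made up for illustration): the input rate is multiplied by scale, spikes are generated by integrate-and-fire accumulation, and each spike’s amplitude is divided by scale so the average output still approximates the original rate.

```python
def spiking_estimate(rate, scale, dt=0.001, n_steps=1000):
    """Estimate a neuron's rate output from spikes, with firing rates
    scaled up by `scale` and spike amplitudes scaled down by `scale`."""
    voltage = 0.0
    total = 0.0
    for _ in range(n_steps):
        voltage += max(rate, 0.0) * scale * dt  # integrate scaled input
        n_spikes = int(voltage)                 # may exceed 1 per step
        voltage -= n_spikes                     # keep the remainder
        total += n_spikes / scale               # rescale spike amplitude
    return total / (n_steps * dt)               # estimated rate in Hz

# The quantization error shrinks roughly as 1/scale, so the gain from
# each further increase in scale eventually becomes negligible.
err_low = abs(spiking_estimate(41.7, 1) - 41.7)
err_high = abs(spiking_estimate(41.7, 100) - 41.7)
```

The diminishing returns show up directly: going from scale 1 to 100 removes most of the error, while pushing scale higher mainly adds more spikes (and, on hardware, more power) for a tiny accuracy gain.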
The scale_firing_rates value also depends on the specifics of the task the model is being employed to solve. For some tasks, you can get good accuracy with lower firing rates. For others (particularly ones that present inputs over longer periods), you might find that a larger scale_firing_rates value (along with synaptic filtering) is needed to retain and propagate the correct information through the network to get desirable results. Once again, this is one of those instances where you’ll need to experiment with your network and with the scale_firing_rates parameter to figure out what is “optimal” for your network and task.
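As a rough illustration of why synaptic filtering helps when inputs are presented over longer periods, below is a minimal discrete first-order lowpass filter (similar in spirit to nengo’s Lowpass synapse, though not necessarily its exact discretization; the names and constants here are illustrative). Applied to a spike train, its output settles toward the underlying firing rate once the input has been presented long enough.

```python
import math

def lowpass_filter(spikes, tau=0.005, dt=0.001):
    """Simple first-order lowpass: y[t] = a*y[t-1] + (1-a)*x[t]."""
    a = math.exp(-dt / tau)
    out = []
    y = 0.0
    for x in spikes:
        y = a * y + (1 - a) * x
        out.append(y)
    return out

# A regular 200 Hz spike train: one spike every 5th step, with
# amplitude 1/dt so each spike integrates to 1.
spikes = [1000.0 if i % 5 == 0 else 0.0 for i in range(1000)]
filtered = lowpass_filter(spikes)
```

Averaged over a spike period late in the presentation, the filtered signal recovers the 200 Hz rate, whereas the raw spike train at any single timestep is either 0 or a large impulse. This is why longer presentation windows plus filtering can substitute for some of the accuracy you would otherwise buy with a higher scale_firing_rates value.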
Note that an upper bound on useful scale_firing_rates values only exists for some neuron types (like the LIF neuron), where a neuron can spike at most once per timestep. However, neurons like the SpikingRectifiedLinear neuron (see your previous question about NengoDL) are able to spike multiple times per timestep, which means that the scale_firing_rates value for such neurons has no upper bound, since increasing the scale_firing_rates parameter just makes them spike more times per timestep.
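The difference between the once-per-timestep cap and multi-spike behaviour can be sketched as follows. This is a toy model only: it ignores the membrane dynamics and refractory period of a real LIF neuron, and cap_one and spikes_per_step are made-up names for illustration.

```python
def spikes_per_step(input_rate, dt=0.001, n_steps=100, cap_one=False):
    """Return the largest number of spikes emitted in any single timestep.

    cap_one=True mimics neurons (like LIF) that can fire at most once
    per timestep; cap_one=False mimics SpikingRectifiedLinear-style
    neurons, which can emit several spikes in one step.
    """
    voltage = 0.0
    max_spikes = 0
    for _ in range(n_steps):
        voltage += input_rate * dt
        n = int(voltage)          # whole threshold crossings this step
        if cap_one:
            n = min(n, 1)         # at most one spike per timestep
        voltage -= n
        max_spikes = max(max_spikes, n)
    return max_spikes
```

With dt=0.001, a capped neuron saturates at 1000 Hz no matter how large the input, so past that point raising scale_firing_rates cannot add information. The uncapped neuron simply packs more spikes into each timestep, which is why scale_firing_rates has no upper bound for it.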