Training SNN with discrete-time spike train

The logic of gain_bias and max_rates_intercepts is specific to the particular neuron model; it provides a way of relating the distribution of weights (gain and bias, as explained briefly in my last post) to some desired distribution of tuning curves. For example, with most neuron models in Nengo, one will typically specify a uniform distribution of intercepts tiled across [-1, 1]; these correspond to the amount of normalized input current (i.e., unit-length vectors projected onto the encoders of the ensemble) required to make each neuron begin spiking. Meanwhile, one will typically specify a uniform distribution of max_rates; these correspond to the frequency of spiking for a normalized input current of 1 (again, unit-length vectors projected onto the encoders). More details and pictures are available here: https://www.nengo.ai/nengo/examples/usage/tuning_curves.html.
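To make the intercept/max_rate semantics concrete, here is a minimal NumPy sketch of how those two constraints pin down a single LIF neuron's gain and bias. This is an illustration of the math, not Nengo's actual code; the time constants (tau_rc = 0.02 s, tau_ref = 0.002 s) are assumptions matching Nengo's LIF defaults, and the function names are my own.

```python
import numpy as np

TAU_RC, TAU_REF = 0.02, 0.002  # assumed LIF membrane / refractory constants


def lif_gain_bias(max_rate, intercept):
    """Solve for (gain, bias) such that J(x) = gain * x + bias crosses the
    spiking threshold (J = 1) at x = intercept and gives max_rate at x = 1."""
    # Current J at which the steady-state LIF rate equals max_rate, from
    # inverting r(J) = 1 / (tau_ref + tau_rc * ln(1 + 1 / (J - 1))).
    j_max = 1 + 1 / (np.exp((1 / max_rate - TAU_REF) / TAU_RC) - 1)
    gain = (j_max - 1) / (1 - intercept)
    bias = 1 - gain * intercept
    return gain, bias


def lif_rate(x, gain, bias):
    """Steady-state LIF firing rate (Hz) for a normalized input x."""
    j = gain * x + bias
    if j <= 1:
        return 0.0  # below threshold: no spiking
    return 1 / (TAU_REF + TAU_RC * np.log1p(1 / (j - 1)))


gain, bias = lif_gain_bias(max_rate=200.0, intercept=-0.5)
# lif_rate(1.0, gain, bias) recovers roughly 200 Hz, and inputs below the
# intercept of -0.5 produce a rate of 0 Hz.
```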

In general, there will be some distribution of tuning curves that works well for the population, based on the space of inputs you’d like it to represent and the kinds of functions you’d like it to support across that space. This distribution will have some correspondence with a multiplicative gain applied to each neuron’s postsynaptic weights, as well as the additive bias applied as an offset to each neuron’s input current. I would suggest exploring the Nengo documentation and referring to the source code for examples of how this is done for various neuron types. Let us know if you have a more specific question about something that you have tried, or how this relates to your use case.
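To show that correspondence at the population level, here is a hedged NumPy sketch that samples the typical distributions mentioned above and maps them to per-neuron gains, biases, and tuning curves. The specific ranges (Uniform(-1, 1) intercepts, Uniform(200, 400) Hz max rates) and the LIF constants are assumptions for illustration; the vectorized inversion mirrors the kind of mapping Nengo performs internally for the LIF model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 50
tau_rc, tau_ref = 0.02, 0.002  # assumed LIF constants

# Assumed distributions, chosen for illustration: where each neuron starts
# spiking, and how fast it fires at a normalized input of 1.
intercepts = rng.uniform(-1.0, 1.0, n_neurons)
max_rates = rng.uniform(200.0, 400.0, n_neurons)

# Invert the steady-state LIF rate equation to get per-neuron gain and bias:
# J(intercept) = 1 (threshold) and J(1) = j_max (the current giving max_rate).
j_max = 1 + 1 / (np.exp((1 / max_rates - tau_ref) / tau_rc) - 1)
gains = (j_max - 1) / (1 - intercepts)
biases = 1 - gains * intercepts

# Tuning curves: firing rate of every neuron over normalized inputs in [-1, 1].
x = np.linspace(-1, 1, 201)
j = gains[:, None] * x[None, :] + biases[:, None]
rates = np.where(
    j > 1,
    1 / (tau_ref + tau_rc * np.log1p(1 / np.maximum(j - 1, 1e-12))),
    0.0,
)  # shape (n_neurons, len(x)); each row is one neuron's tuning curve
```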