Thanks for the extra info; that certainly helps. Do you know if it’s necessary to train the three parameters (alpha, beta, gamma) for this task? Often we find that randomizing those parameters within some distribution, and then training only the readout weights (i.e., “decoders”) from the neural activity, is sufficient – especially for datasets as simple as MNIST, but even for more complicated tasks such as classifying surface textures by touch, assuming an appropriate encoding. The optimization we typically perform in Nengo is this kind of training, where the “encoding” is fixed and the “decoding” is trained by solving for the optimal readout weights from the activities.
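To make that concrete, here is a minimal sketch of that workflow (the decoded function and the solver regularization are arbitrary choices for illustration, not taken from your setup): the encoding parameters are left at their random initialization, and Nengo solves for the decoders by regularized least squares when building the connection.

```python
import numpy as np
import nengo

with nengo.Network() as model:
    stim = nengo.Node(lambda t: np.sin(t))
    # Gains, biases, and encoders are randomly initialized and stay fixed
    ens = nengo.Ensemble(n_neurons=100, dimensions=1)
    out = nengo.Node(size_in=1)
    nengo.Connection(stim, ens)
    # Nengo solves for the optimal readout weights (decoders) here;
    # LstsqL2 is the default regularized least-squares solver
    nengo.Connection(ens, out, function=lambda x: x ** 2,
                     solver=nengo.solvers.LstsqL2(reg=0.1))
    probe = nengo.Probe(out, synapse=0.01)

with nengo.Simulator(model) as sim:
    sim.run(1.0)
```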
If you want to be able to train the encoding parameters, then you need a method that can train across a layer, such as backpropagation. NengoDL allows you to take a Nengo model and optimize it using TensorFlow for exactly this purpose. Nengo also doesn’t force you to use any one kind of training method: if you have an alternate learning rule in mind, you can implement that learning rule in Nengo, either online or offline.
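As a rough sketch of the NengoDL route (the toy 2-D multiplication task, network sizes, and training hyperparameters here are all placeholder assumptions): by default NengoDL treats the encoders/gains, neuron biases, and decoders as trainable parameters, so backpropagation adjusts the encoding as well as the readout.

```python
import numpy as np
import tensorflow as tf
import nengo
import nengo_dl

with nengo.Network() as net:
    inp = nengo.Node(np.zeros(2))
    ens = nengo.Ensemble(n_neurons=100, dimensions=2)
    out = nengo.Node(size_in=1)
    nengo.Connection(inp, ens)
    # Decoders are initialized by the least-squares solve, then trained
    nengo.Connection(ens, out, function=lambda x: [x[0] * x[1]])
    probe = nengo.Probe(out)

# Toy data, shaped (n_examples, n_steps, dimensions)
x_train = np.random.uniform(-1, 1, size=(1024, 1, 2))
y_train = (x_train[..., :1] * x_train[..., 1:]).astype(np.float32)

with nengo_dl.Simulator(net, minibatch_size=32) as sim:
    sim.compile(optimizer=tf.optimizers.Adam(1e-3),
                loss=tf.losses.MeanSquaredError())
    sim.fit({inp: x_train}, {probe: y_train}, epochs=10)
```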
The two parameters of the custom neuron model (gain and bias) that you refer to are equivalent to the alpha and gamma parameters that we were talking about above. Typically we do not train these, but you could do so using NengoDL. The methods in the neuron model pertaining to gains and biases simply relate these parameters to the distributions of tuning curves, so that Nengo can randomly initialize them. But you don’t need these methods if you’re not using them (i.e., if you manually specify initial values for gains and biases when you create the nengo.Ensemble).
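For example, a sketch of specifying the gains and biases directly (the particular values here are arbitrary); Nengo then uses these values as given rather than sampling gains and biases from the max_rates/intercepts distributions:

```python
import numpy as np
import nengo

n_neurons = 50
with nengo.Network():
    ens = nengo.Ensemble(
        n_neurons=n_neurons,
        dimensions=1,
        gain=np.full(n_neurons, 1.5),  # alpha, one value per neuron
        bias=np.zeros(n_neurons),      # gamma, one value per neuron
    )
```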
The beta parameter is more difficult to train, since it is equivalent to the weight on a recurrent connection from each neuron back to itself. You could train beta by expressing the model as a recurrent neural network in NengoDL.
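A sketch of that structure (the beta value and synapse are assumptions for illustration): each neuron’s beta sits on the diagonal of a recurrent weight matrix from the ensemble’s neurons back to themselves. Loading this network into nengo_dl.Simulator makes that recurrent transform trainable, although note that unconstrained training would adjust the full matrix, not just the diagonal, unless you restrict which parameters are trainable.

```python
import numpy as np
import nengo

n_neurons = 50
beta = 0.9  # assumed initial value for the self-recurrence weight

with nengo.Network() as net:
    ens = nengo.Ensemble(n_neurons=n_neurons, dimensions=1)
    # beta expressed as a diagonal recurrent weight matrix: each neuron
    # connects back to itself with weight beta
    nengo.Connection(ens.neurons, ens.neurons,
                     transform=beta * np.eye(n_neurons),
                     synapse=0.1)
```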