How to change the learning rate during training

I have a question: can Nengo change the learning rate during training? For example, start learning with a large learning rate to explore the range of possible solutions, then reduce the learning rate to converge gradually on the optimal solution.
I know that such algorithms exist for ANNs, so I wonder if a similar mechanism exists in Nengo for changing the learning rate during training? Or is there an example of a similar alternative solution?
Any good ideas or examples are welcome! :smile:

Hi @YL0910!

The learning rules included in the default installation of Nengo don’t support modifying the learning rate during the simulation (i.e., the rate is fixed for the entire simulation). However, most of the Nengo learning rules take a form similar to the PES learning rule, i.e.,

\Delta \omega = -\kappa \mathbf{E} a

Or, in words: the change in the weights is proportional to the learning rate (\kappa) multiplied by some error signal (\mathbf{E}) multiplied by the activity of the neuron (a). Looking at the formulation above, we can see that scaling the learning rate by some scalar is equivalent to keeping the learning rate constant and scaling the error signal by that same scalar (because multiplication is commutative). Thus, if you are using the default Nengo learning rules, you can effectively “modify the learning rate” by modifying the magnitude of the error signal instead.
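To make the commutativity argument concrete, here is a small NumPy sketch (the array shapes and the 0.1 scale factor are just illustrative choices) showing that scaling \kappa and scaling \mathbf{E} produce identical weight updates:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
kappa = 1e-3                    # nominal learning rate
E = rng.normal(size=3)          # some 3-D error signal
a = rng.random(size=50)         # activities of 50 neurons
scale = 0.1                     # desired reduction in the learning rate

# Option 1: scale the learning rate itself
dw_scaled_rate = -(kappa * scale) * np.outer(a, E)

# Option 2: keep the rate fixed and scale the error signal instead
dw_scaled_error = -kappa * np.outer(a, scale * E)

# Both options yield the same weight update
print(np.allclose(dw_scaled_rate, dw_scaled_error))  # → True
```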

If you want to use more complex learning rules (that don’t follow the formulation above), or want a “proper” implementation of learning rate tempering, an alternative approach is to write a custom learning rule that supports a variable learning rate.