How to change the learning rate during training

Hi @YL0910!

The learning rules included in the default installation of Nengo don’t support changing the learning rate during a simulation (i.e., the learning rate is fixed for the entire simulation run). However, most of the Nengo learning rules have a form similar to the PES learning rule, i.e.,

$$\Delta \omega = -\kappa \mathbf{E} a$$

Or, in words: the change in the weights is proportional to the learning rate ($\kappa$) multiplied by some error signal ($\mathbf{E}$) multiplied by the activity of the neuron ($a$). Looking at this formulation, we can see that scaling the learning rate by some factor is equivalent to keeping the learning rate constant and scaling the error signal by that same factor (because multiplication is commutative). Thus, if you are using the default Nengo learning rules, you can effectively “modify the learning rate” during a simulation by modulating the magnitude of the error signal, even though the learning rate itself stays fixed.
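For example, with the built-in PES rule you can route the error through a `Node` that scales it by a time-dependent factor before it reaches the learning rule. The network below is only a minimal sketch: the ensemble sizes, the `1e-4` base learning rate, and the `lr_scale` schedule are arbitrary choices for illustration.

```python
import numpy as np
import nengo

# Hypothetical schedule: full learning at the start of the run,
# ramping down to 10% of the base learning rate by t = 1 s.
def lr_scale(t):
    return max(0.1, 1.0 - 0.9 * t)

with nengo.Network() as model:
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))
    pre = nengo.Ensemble(100, dimensions=1)
    post = nengo.Ensemble(100, dimensions=1)
    nengo.Connection(stim, pre)

    # Learned connection; the PES learning rate itself never changes.
    conn = nengo.Connection(
        pre, post,
        function=lambda x: [0],
        learning_rule_type=nengo.PES(learning_rate=1e-4),
    )

    # Raw error signal: error = post - target
    error = nengo.Ensemble(100, dimensions=1)
    nengo.Connection(post, error)
    nengo.Connection(stim, error, transform=-1)

    # Scale the error by the schedule before it reaches the learning rule;
    # this has the same effect as scaling the learning rate itself.
    scaled_error = nengo.Node(lambda t, x: lr_scale(t) * x, size_in=1)
    nengo.Connection(error, scaled_error)
    nengo.Connection(scaled_error, conn.learning_rule)

with nengo.Simulator(model) as sim:
    sim.run(2.0)
```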

If you want to use a more complex learning rule (one that doesn’t take the form above), or want a “proper” implementation of learning rate scheduling, an alternative approach is to write a custom learning rule that supports a variable learning rate.
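Below is a rough sketch of what such a rule could look like, assuming Nengo 3.x’s builder API (`LearningRuleType`, `Builder.register`, a custom `Operator`, and the `model.sig[rule]["delta"]` signal that the generic learning rule builder creates for decoder updates). It only handles decoded Ensemble-to-Ensemble connections, uses a fixed `Lowpass(0.005)` pre-synapse, and mirrors the scaling used by the built-in PES. The names `ScheduledPES` and `SimScheduledPES` are made up for this example, so treat it as a starting point rather than a drop-in implementation.

```python
import numpy as np

import nengo
from nengo.builder import Builder, Signal
from nengo.builder.operator import Operator, Reset
from nengo.learning_rules import LearningRuleType


class ScheduledPES(LearningRuleType):
    """PES-like rule whose learning rate is scaled by a schedule(t) callable."""

    modifies = "decoders"
    probeable = ("error", "activities", "delta")

    def __init__(self, schedule, learning_rate=1e-4):
        super().__init__(learning_rate, size_in="post_state")
        self.schedule = schedule  # callable mapping time -> scale factor


class SimScheduledPES(Operator):
    """Compute delta = -schedule(t) * kappa * dt / n * outer(error, activities)."""

    def __init__(self, pre_filtered, error, time, delta,
                 schedule, learning_rate, tag=None):
        super().__init__(tag=tag)
        self.schedule = schedule
        self.learning_rate = learning_rate
        self.sets = []
        self.incs = []
        self.reads = [pre_filtered, error, time]
        self.updates = [delta]

    def make_step(self, signals, dt, rng):
        pre_filtered = signals[self.reads[0]]
        error = signals[self.reads[1]]
        time = signals[self.reads[2]]
        delta = signals[self.updates[0]]
        n_neurons = pre_filtered.shape[0]
        schedule, learning_rate = self.schedule, self.learning_rate

        def step_scheduledpes():
            # Same scaling as the built-in PES, times the schedule value
            alpha = -learning_rate * schedule(time.item()) * dt / n_neurons
            np.outer(alpha * error, pre_filtered, out=delta)

        return step_scheduledpes


@Builder.register(ScheduledPES)
def build_scheduled_pes(model, scheduled_pes, rule):
    conn = rule.connection

    # Error signal; connections made into `conn.learning_rule` project here
    error = Signal(np.zeros(rule.size_in), name="ScheduledPES:error")
    model.add_op(Reset(error))
    model.sig[rule]["in"] = error

    # Low-pass filtered presynaptic activities
    acts = model.build(nengo.Lowpass(0.005), model.sig[conn.pre_obj.neurons]["out"])

    model.add_op(
        SimScheduledPES(
            acts, error, model.time, model.sig[rule]["delta"],
            scheduled_pes.schedule, scheduled_pes.learning_rate,
        )
    )

    # Expose signals so they can be probed
    model.sig[rule]["error"] = error
    model.sig[rule]["activities"] = acts
```

You would then use it in place of `nengo.PES`, e.g. `learning_rule_type=ScheduledPES(schedule=lambda t: max(0.1, 1.0 - 0.9 * t))`, with the error connected directly to `conn.learning_rule` as in the earlier example (no scaling node needed).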