The LMU code uses the Keras RNN class to construct LMU layers.
The Keras RNN class doesn’t appear to support regularization.
Perhaps I have a misunderstanding about the topology of an LMU layer, but if it is fully recurrent (all outputs feed back to all neurons), then an LMU layer with N neurons would have N^2 recurrent weights undergoing training, which would make them appropriate candidates for regularization.
If so, how does one go about regularizing these weights?
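To make the question concrete, here is a small NumPy sketch of what "regularizing the recurrent weights" would mean for such a layer. The weight matrix and the L2 strength here are hypothetical, purely for illustration:

```python
import numpy as np

N = 8  # hypothetical number of neurons in the layer
rng = np.random.default_rng(0)

# A fully recurrent layer has an N x N matrix of recurrent weights,
# i.e. N^2 trainable parameters feeding outputs back to all neurons:
W_rec = rng.standard_normal((N, N))

# An L2 penalty on those weights, as would be added to the training loss:
l2_strength = 1e-4  # hypothetical regularization coefficient
penalty = l2_strength * np.sum(W_rec ** 2)
```

In Keras, a penalty like this is normally attached declaratively (e.g. via a `recurrent_regularizer` argument on built-in recurrent layers) rather than computed by hand; the question is how to do the equivalent for the LMU's recurrent weights.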
Hi @jabowery, and welcome back to the Nengo forums!
The default installation (from pip) of KerasLMU doesn’t support regularization of the recurrent weights. However, there is an open pull request on the KerasLMU GitHub repository that may have the functionality you require. To install that version, first uninstall KerasLMU from your Python environment, then do: