The LMU code uses the Keras RNN class to construct LMU layers.
The Keras RNN class itself doesn't appear to accept any regularizer arguments.
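For comparison, here is a rough sketch of what I mean (assuming standard tf.keras; SimpleRNNCell is just a stand-in for the LMU cell): with an ordinary cell, the regularizer is passed to the cell rather than to the RNN wrapper.

```python
import tensorflow as tf

# With a plain Keras cell, regularization of the recurrent weights is
# specified on the cell, not on the RNN wrapper:
cell = tf.keras.layers.SimpleRNNCell(
    64, recurrent_regularizer=tf.keras.regularizers.l2(1e-4)
)
rnn = tf.keras.layers.RNN(cell)  # the penalty is collected via rnn.losses
```

I don't see an equivalent argument for the LMU layer.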
Perhaps I have a misunderstanding about the topology of an LMU layer, but if it is fully recurrent (all outputs feed back to all neurons), then an LMU layer with N neurons would have N^2 recurrent weights undergoing training, and those weights seem like natural candidates for regularization.
If so, how does one go about regularizing these weights?
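The only workaround I can think of is adding the penalty by hand after the layer is built, along these lines (a rough sketch assuming tf.keras; I'm using SimpleRNNCell as a stand-in again, and recurrent_kernel is that cell's attribute name, which may not match how the LMU cell exposes its trained recurrent weights):

```python
import tensorflow as tf

# Sketch: manually add an L2 penalty on the N x N recurrent weight
# matrix of a built RNN layer, since the RNN wrapper itself exposes
# no regularizer arguments.
cell = tf.keras.layers.SimpleRNNCell(64)  # stand-in for an LMU cell
inputs = tf.keras.Input(shape=(None, 32))
outputs = tf.keras.layers.RNN(cell)(inputs)
model = tf.keras.Model(inputs, outputs)

# cell.recurrent_kernel is the N x N recurrent weight matrix here;
# the corresponding attribute on the LMU cell may differ (assumption).
l2 = 1e-4
model.add_loss(lambda: l2 * tf.reduce_sum(tf.square(cell.recurrent_kernel)))
```

Is something like this the intended approach, or is there a supported way to do it?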