# LMU RNN Layer Regularization?

The LMU code uses the Keras RNN class to construct LMU layers.
The Keras RNN class doesn’t appear to support regularization.

Perhaps I have a misunderstanding about the topology of an LMU layer, but if it is fully recurrent (all outputs feed back to all neurons), then an LMU layer with N neurons would have N^2 recurrent weights undergoing training, which would make them appropriate candidates for regularization.

If so, how does one go about regularizing these weights?
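Concretely, this is the kind of penalty I have in mind. A minimal NumPy sketch (N and the penalty weight are made-up values, not anything from KerasLMU):

```python
import numpy as np

N = 8  # hypothetical number of neurons in the layer
rng = np.random.default_rng(0)

# A fully recurrent layer has an N x N matrix of trainable recurrent weights
W_recurrent = rng.standard_normal((N, N))
assert W_recurrent.size == N ** 2  # N^2 weights undergoing training

# An L2 regularizer would add lam * sum(w^2) to the training loss
lam = 0.01
l2_penalty = lam * float(np.sum(W_recurrent ** 2))
```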

I’m trying this (brute-forcing a global kernel_regularizer):

```python
from tensorflow.keras.layers import RNN
from tensorflow.keras.regularizers import l2
from keras_lmu import LMUCell

kernel_regularizer = l2(0.01)

def delay_layer(units, **kwargs):
    layer = RNN(
        LMUCell(units=units, order=4, theta=4),
        return_sequences=True,
        **kwargs,
    )
    # Brute-force attempt: attach a regularizer after construction
    layer.regularizers = [kernel_regularizer]
    return layer
```

Hi @jabowery, and welcome back to the Nengo forums!

The default installation (from pip) of keras-lmu doesn't support regularization of the recurrent weights. However, there is an open pull request on the KerasLMU GitHub that may have the functionality you require. To install that version, first uninstall KerasLMU from your Python environment, then run:

```shell
pip uninstall keras-lmu
git clone https://github.com/nengo/keras-lmu
cd keras-lmu
git checkout bias-regularizers
pip install -e .
```


As a note, I haven’t used this functionality myself, so I cannot say with certainty if this code will give you the functionality you want.

The terminology used for the various kernels is unfamiliar to me, but one of them is likely the recurrent weights. Thanks!

If I recall correctly, there can be multiple types of recurrent connections in an LMU layer. If you look at the schematic of an LMU cell on this page:

• $e_h$ and $\mathbf{W}_h$ are the encoder and kernel for the hidden layer, respectively
• $e_m$ and $\mathbf{W}_m$ are the encoder and kernel for the memory, respectively
• $e_x$ and $\mathbf{W}_x$ are the encoder and kernel for the input, respectively

Hopefully, you should be able to identify which recurrence you want to regularize.
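To illustrate what regularizing just one of these kernels means, here is a minimal, framework-agnostic NumPy sketch. The shapes and names below are illustrative assumptions for this example, not the actual KerasLMU parameter names:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative shapes only, not the actual KerasLMU internals
units, order, input_dim = 16, 4, 3
W_x = rng.standard_normal((input_dim, units))  # input kernel
W_h = rng.standard_normal((units, units))      # hidden (recurrent) kernel
W_m = rng.standard_normal((order, units))      # memory kernel

def l2_penalty(w, lam=0.01):
    """L2 penalty that would be added to the training loss for one kernel."""
    return lam * float(np.sum(w ** 2))

# To regularize only the recurrent connection, penalize W_h alone
recurrent_penalty = l2_penalty(W_h)
# W_x and W_m are left unpenalized in this sketch
```

The same idea applies whichever kernel you settle on: add its penalty to the loss, and leave the others out.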