The KerasLMU team is happy to announce the release of KerasLMU 0.4.0.
KerasLMU is a Keras-based implementation of Legendre Memory Units (LMUs), a novel type of recurrent memory cell that dynamically maintains information across long windows of time using relatively few resources. LMUs have been shown to perform better than standard LSTMs and other RNN-based models on a variety of tasks, generally with fewer internal parameters (see the paper for more details).
KerasLMU is built using the standard Keras RNN API. If you have a model containing an RNN layer, such as
tf.keras.layers.RNN(tf.keras.layers.LSTMCell(...)), that layer can be swapped with keras_lmu.LMU(...).
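As a rough sketch of what such a swap might look like (the memory_d, order, and theta values below are illustrative, not recommendations):

```python
import tensorflow as tf
import keras_lmu

# Input with 32 features per timestep (shapes are purely illustrative)
inputs = tf.keras.Input(shape=(None, 32))

# Standard recurrent layer built from an LSTM cell
lstm_out = tf.keras.layers.RNN(tf.keras.layers.LSTMCell(64))(inputs)

# The same layer swapped for an LMU layer
lmu_out = keras_lmu.LMU(
    memory_d=1,
    order=128,
    theta=100,
    hidden_cell=tf.keras.layers.SimpleRNNCell(64),
)(inputs)
```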
More information on the available parameters and configuration options can be found in the documentation.
The main new feature in this release is the option to learn the
theta parameter of the LMU during training. This comes with an option to choose which discretization method to use when computing the A/B matrices (this is mainly useful in combination with
trainable_theta=True, where setting
discretizer="euler" may improve the training speed, possibly at the cost of some accuracy).
This release also includes a number of improvements to the feedforward LMU implementation (which has been renamed from
LMUFFT to LMUFeedforward). It now supports
memory_d > 1, meaning that the feedforward implementation can be used in a wider variety of LMU models. There is also a new
conv_mode option, which can be used to switch to a raw convolution (rather than FFT-based) implementation, which may be faster depending on the model. Note that if you are using
keras_lmu.LMU(...), then KerasLMU will automatically use
LMUFeedforward under the hood if the necessary conditions are met (
memory_to_memory=False, and a fixed sequence length, i.e., the shape of the time axis in the inputs is not None).
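For instance, a sketch of both ways of getting the feedforward implementation (parameter values, and the use of a Dense hidden cell, are illustrative assumptions):

```python
import tensorflow as tf
import keras_lmu

# Using the feedforward implementation directly, with multiple memory
# dimensions and the raw (non-FFT) convolution implementation
lmu_ff = keras_lmu.LMUFeedforward(
    memory_d=4,
    order=64,
    theta=32,
    hidden_cell=tf.keras.layers.Dense(64),
    conv_mode="raw",
)

# With memory_to_memory=False and a fixed-length time axis, keras_lmu.LMU
# will use the feedforward implementation under the hood automatically
inputs = tf.keras.Input(shape=(100, 32))  # time axis fixed at 100 steps
outputs = keras_lmu.LMU(
    memory_d=4,
    order=64,
    theta=32,
    hidden_cell=tf.keras.layers.Dense(64),
    memory_to_memory=False,
)(inputs)
```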
Check out the GitHub release page for a full changelog.
To install KerasLMU, we recommend using
pip install keras-lmu
More detailed installation instructions can be found here.
If you run into any issues upgrading, or have any other questions, please post them in this forum.