Spiking LSTM layers in Nengo Loihi


I’m currently working on anomaly detection in time-series data with LSTM autoencoders.
I was wondering about applying SNNs to this project, since their temporal dynamics might be a good fit for time-dependent data, especially when running on neuromorphic hardware.

I have a simple autoencoder written in Keras:

from tensorflow.keras.layers import Input, LSTM, RepeatVector, TimeDistributed, Dense
from tensorflow.keras.models import Model

def autoencoder_LSTM(X):
    inputs = Input(shape=(X.shape[1], X.shape[2]))
    # encoder: compress the sequence down to an 8-dimensional code
    L1 = LSTM(32, activation='relu', return_sequences=True)(inputs)
    L2 = LSTM(8, activation='relu', return_sequences=False)(L1)
    # decoder: repeat the code and reconstruct the original sequence
    L3 = RepeatVector(X.shape[1])(L2)
    L4 = LSTM(8, activation='relu', return_sequences=True)(L3)
    L5 = LSTM(32, activation='relu', return_sequences=True)(L4)
    output = TimeDistributed(Dense(X.shape[2]))(L5)
    model = Model(inputs=inputs, outputs=output)
    return model

I was thinking about rewriting this model in Nengo and simulating it in nengo-dl (e.g. with LIF neurons), either by repeating the input/target data for a number of timesteps or by running it directly through nengo-loihi.
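The “repeat the input over timesteps” idea can be sketched in plain NumPy. The array sizes and `n_steps` value below are hypothetical; the general pattern is to present each (flattened) example unchanged for several simulation timesteps so the spiking neurons have time to settle:

```python
import numpy as np

# toy batch of sequences: (batch, seq_len, features) -- hypothetical sizes
X = np.random.rand(4, 10, 3).astype(np.float32)

n_steps = 30  # hypothetical number of SNN simulation timesteps

# flatten each sequence, then present the same flat vector at every timestep
X_flat = X.reshape((X.shape[0], 1, -1))     # (4, 1, 30)
X_tiled = np.tile(X_flat, (1, n_steps, 1))  # (4, 30, 30)
print(X_tiled.shape)  # (4, 30, 30)
```

The same tiling would be applied to the target data before training or evaluating the spiking version of the network.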

Is such an implementation possible in Nengo, and are RNN layers (especially LSTMs) supported for SNN conversion? I would appreciate any comments on this idea.


Hi Bartek,
For doing time-series modelling with Nengo DL, the best place to start is likely with the Legendre Memory Unit (LMU) tutorial at https://www.nengo.ai/nengo-dl/examples/lmu.html. This is a recurrent network architecture that performs as well as or better than LSTMs on most tasks, and is well-suited to a spiking implementation. I don’t think we have any off-the-shelf examples of an SNN implementation of an LSTM, so there’d likely be some work involved in getting it working, particularly with respect to the gating mechanisms. So, I’d start with the LMU and move to the LSTM afterwards if necessary.
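For context, the core of the LMU is a fixed linear memory that compresses a sliding window of its input onto Legendre polynomials (Voelker et al., 2019). A minimal NumPy sketch of that memory, with hypothetical `order`, `theta`, and input signal, and a simple forward-Euler discretization rather than the exact discretization used in the tutorial, might look like:

```python
import numpy as np

def lmu_matrices(order):
    """(A, B) of the LMU's continuous-time delay system: theta * dm/dt = A @ m + B * u."""
    q = np.arange(order, dtype=np.float64)
    r = 2 * q + 1
    i, j = np.meshgrid(q, q, indexing="ij")
    A = np.where(i < j, -1.0, (-1.0) ** (i - j + 1)) * r[:, None]
    B = ((-1.0) ** q) * r
    return A, B

# hypothetical parameters: memory order and window length (in timesteps)
order, theta, dt = 8, 100.0, 1.0
A, B = lmu_matrices(order)

# run the memory over a toy input signal with a forward-Euler update
u = np.sin(2 * np.pi * np.arange(500) / 100.0)
m = np.zeros(order)
for u_t in u:
    m = m + (dt / theta) * (A @ m + B * u_t)

print(m.shape)  # (8,)
```

In the full LMU cell, `m` is combined with learned encoders and a nonlinear hidden state; in the spiking version, the nonlinearity is replaced by spiking neurons while the `(A, B)` memory stays fixed.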

Once you’ve had a look, feel free to let us know if you have any additional questions!

Hi Peter,

Thanks for helping. I’ll give it a try and let you know in case of further questions.