Hi,
I’m currently working on anomaly detection in time series data with LSTM autoencoders.
I was wondering about applying SNNs (spiking neural networks) to this project, since their temporal dynamics might be a good fit for time-dependent data, especially when running on neuromorphic hardware.
I have a simple autoencoder written in Keras:
from tensorflow.keras.layers import Input, LSTM, RepeatVector, TimeDistributed, Dense
from tensorflow.keras.models import Model
from tensorflow.keras import regularizers

def autoencoder_LSTM(X):
    # X has shape (samples, timesteps, features)
    inputs = Input(shape=(X.shape[1], X.shape[2]))
    # encoder: compress each window down to an 8-dimensional code
    L1 = LSTM(32, activation='relu', return_sequences=True,
              kernel_regularizer=regularizers.l2(0.00))(inputs)
    L2 = LSTM(8, activation='relu', return_sequences=False)(L1)
    # repeat the code across the window length for the decoder
    L3 = RepeatVector(X.shape[1])(L2)
    # decoder: reconstruct the original sequence from the code
    L4 = LSTM(8, activation='relu', return_sequences=True)(L3)
    L5 = LSTM(32, activation='relu', return_sequences=True)(L4)
    output = TimeDistributed(Dense(X.shape[2]))(L5)
    model = Model(inputs=inputs, outputs=output)
    return model
I was thinking about converting this model to Nengo and simulating it in nengo-dl (e.g. with LIF neurons), repeating the input/target data for a number of timesteps, or running it directly through nengo-loihi.
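Roughly, this is what I had in mind for the nengo-dl route. It is only a sketch: I am assuming nengo_dl.Converter accepts this model (with allow_fallback=True wrapping any unsupported layers, like the LSTMs, in TensorNodes), and the placeholder data shape, n_steps, and minibatch_size are guesses:

import numpy as np
import tensorflow as tf
import nengo
import nengo_dl

# placeholder windowed data: (samples, timesteps, features)
X = np.random.rand(100, 50, 3).astype(np.float32)
model = autoencoder_LSTM(X)

# Convert the Keras model to a Nengo network, swapping ReLU for
# spiking LIF neurons; allow_fallback=True lets layers with no
# native Nengo equivalent run inside TensorNodes instead.
converter = nengo_dl.Converter(
    model,
    allow_fallback=True,
    swap_activations={tf.nn.relu: nengo.LIF()},
)

n_steps = 30  # how long to present each sample (a guess)
# The converter flattens the Keras input shape into one vector per
# simulator timestep, so tiling presents each whole window for
# n_steps timesteps, as described above.
tiled = np.tile(X.reshape(X.shape[0], 1, -1), (1, n_steps, 1))

with nengo_dl.Simulator(converter.net, minibatch_size=10) as sim:
    out = sim.predict({converter.inputs[model.input]: tiled})

For the nengo-loihi route I would presumably build the same converter.net and run it with nengo_loihi.Simulator, though I am not sure whether fallback TensorNodes can be run that way at all.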
Is such an implementation possible in Nengo, and are RNN layers (especially LSTMs) supported for SNN conversion? I would appreciate any comments on this idea.
Thanks,
Bartek