SNNs and LMU with Nengo-DL: is it always a spiking implementation?

Greetings!

I am interested in time-series forecasting using Spiking Neural Networks. For this purpose, over the past few months I have been experimenting with the LMU implementation you describe here.

In this implementation the following component is used:

 self.h = nengo_dl.TensorNode(tf.nn.tanh, shape_in=(units,), pass_time=False)

To my understanding, this is a non-spiking implementation, since `tf.nn.tanh` is not a spiking neuron model. Correct me if I am wrong. However, from what I have understood, when we are in the "testing phase" and we use `nengo_dl.Simulator`, the context switches from rate to spiking. Is this true?

I have also seen another thread where a user has a similar use case. He asks "how to implement a spiking version of the LMU in NengoDL". The recommended approach for his goal is to "use a Nengo network from the start (as in the NengoDL LMU example). However, as I mentioned before, you'll need to replace the self.h population with a nengo.Ensemble in order to make it spiking." This, however, seems to contradict the NengoDL LMU example itself, since there the `h` component is not an Ensemble but a TensorNode.

What is the definition of an SNN implemented with NengoDL?
What are the differences between the train and test phases? I assume it is fully in line with my goal of "time-series forecasting using SNNs" to train the network in a rate context but evaluate it in a spiking one, right?

I am a bit lost, to be frank, as I do not understand what I need to do in order to perform time-series forecasting using a spiking LMU. I would really appreciate some guidance on these general topics.
Thank you very much! :slight_smile: