I spoke to the NengoDL and KerasLMU devs, and from what I understand, trying to convert a KerasLMU model to an SNN using the NengoDL converter is difficult because of the differences between how TF (which KerasLMU runs on) and Nengo represent time.
In order to “convert” a KerasLMU network into a spiking version, you’ll probably need to:
- Train the KerasLMU network in TF
- Save the network connection weights after training
- Construct an equivalent spiking network in NengoDL
- Load the saved connection weights into the NengoDL model and run it as a spiking network (a rough sketch of this workflow follows below).
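
To make those steps a bit more concrete, here is a rough, untested sketch of what the save/load workflow could look like. The layer sizes, LMU parameters (`memory_d`, `order`, `theta`), and the mapping from Keras weight arrays onto Nengo connections are all placeholders that you'd need to adapt to your own model:

    import numpy as np
    import tensorflow as tf
    import keras_lmu
    import nengo

    units = 10  # hidden layer size -- placeholder, use whatever your model uses

    # 1) Train the KerasLMU network in TF (toy single-output model for illustration)
    inp = tf.keras.Input(shape=(None, 1))
    lmu = tf.keras.layers.RNN(
        keras_lmu.LMUCell(
            memory_d=1,
            order=4,
            theta=25,
            hidden_cell=tf.keras.layers.SimpleRNNCell(units),
        )
    )(inp)
    out = tf.keras.layers.Dense(1)(lmu)
    model = tf.keras.Model(inp, out)
    model.compile(optimizer="adam", loss="mse")
    # model.fit(train_x, train_y, epochs=10)

    # 2) Save the trained connection weights
    np.savez("lmu_weights.npz", *model.get_weights())

    # 3) + 4) Build the equivalent spiking network in Nengo/NengoDL and load the
    # saved weights back in as connection transforms. How the Keras weight arrays
    # map onto Nengo connections depends on your architecture; only the output
    # layer is shown here as an example.
    dense_kernel = model.layers[-1].get_weights()[0]  # ignoring the Dense bias for brevity

    with nengo.Network() as net:
        h = nengo.Ensemble(
            units,
            1,
            neuron_type=nengo.RegularSpiking(nengo.Tanh(tau_ref=1)),
            gain=np.ones(units),
            bias=np.zeros(units),
        )
        readout = nengo.Node(size_in=1)
        # Keras Dense kernels are (in, out), Nengo transforms are (out, in)
        nengo.Connection(h.neurons, readout, transform=dense_kernel.T, synapse=None)
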
I haven’t gone through this process myself, but @travis.dewolf has, and he may have some code he can share to help you with it.
Another approach to training a spiking LMU network is to use a Nengo network from the start (as in the NengoDL LMU example). However, as I mentioned before, you’ll need to replace the `self.h` population with a `nengo.Ensemble` to make it spiking. If you take this approach, you’ll also need to manually specify the neuron parameters to emulate the defaults used by TF. I found that the following (a spiking version of the tanh neuron) works reasonably well:
    self.h = nengo.Ensemble(
        units,
        1,
        # RegularSpiking wraps the rate Tanh model to emit spikes (Nengo >= 3.1)
        neuron_type=nengo.RegularSpiking(nengo.Tanh(tau_ref=1)),
        gain=np.ones(units),
        bias=np.zeros(units),
    ).neurons
The gain and bias values are 1 and 0, matching the TF defaults, and the `tau_ref` value limits the maximum firing rate of the neuron to about 1, which matches the output range of TF’s default tanh activation. You can tweak these values to see if they improve your results. You can also substitute the neuron model with `nengo.SpikingRectifiedLinear` (or other neuron types) in place of the spiking Tanh neuron, as in the example below.
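
For example, keeping the same gain and bias as above (which you may want to retune for a ReLU-style response), swapping in a spiking ReLU would look something like this:

    self.h = nengo.Ensemble(
        units,
        1,
        neuron_type=nengo.SpikingRectifiedLinear(),  # spiking ReLU instead of spiking tanh
        gain=np.ones(units),
        bias=np.zeros(units),
    ).neurons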