Time series data with Nengo Loihi

Hello,

I am working on an implementation of SNNs (on Nengo Loihi) with real time-series data. Going over some of the examples, I was wondering if there is a way to leverage the fact that my data is actually time-dependent.

For example: https://www.nengo.ai/nengo-loihi/examples/mnist-convnet.html. In this example, Cell [5] contains a comment and code for repeating the input data over time, since spiking neurons are inherently time-dependent:

# for the test data evaluation we'll be running the network over time
# using spiking neurons, so we need to repeat the input/target data
# for a number of timesteps (based on the presentation_time)
n_steps = int(presentation_time / dt)
test_images = np.tile(test_images[:minibatch_size*2, None, :],
                      (1, n_steps, 1))
test_labels = np.tile(test_labels[:minibatch_size*2, None, None],
                      (1, n_steps, 1))

Since MNIST is not time-dependent in any way, it makes sense in this example that the best approach is to repeat the same input data for a set number of steps, and basically every example presents the data this way. However, with actual time-series data, I was wondering if there is a better way to go about this. Can we leverage the fact that the data is time-dependent by introducing it chronologically across the input timesteps? I would usually say that just presenting all the data at once would be better, but in this specific case, would formatting the input so that it is introduced chronologically be beneficial to the SNN? Any ideas on how to do this?

Thanks so much for the help!
Eric

Hi Eric,

Thank you for your question! It’s an interesting one, and I’ll do my best to address it below.

First off, how you want to present the data to your Nengo model really depends on the task you are trying to solve. If the task involves processing past data to make a future decision or action, this can be done in Nengo, but you’ll need some sort of memory (e.g., an integrator or an LMU cell) to hold that information in the network until the appropriate decision-making time; a minimal integrator sketch follows below. If the task doesn’t involve temporal data, then a feed-forward network architecture usually suffices.
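
To illustrate the integrator idea, here is a minimal sketch of the standard NEF-style spiking integrator in Nengo. The ensemble size, synapse time constant, and input signal are all placeholder values chosen for illustration, not taken from your task:

import nengo

tau = 0.1  # recurrent synapse time constant (assumed value)

with nengo.Network() as model:
    # Brief input pulse that the network should "remember"
    stim = nengo.Node(lambda t: 1.0 if t < 0.2 else 0.0)
    memory = nengo.Ensemble(n_neurons=100, dimensions=1)
    # Scale the input by tau so the integrator accumulates it correctly
    nengo.Connection(stim, memory, transform=tau, synapse=tau)
    # Recurrent connection holds the integrated value after the input ends
    nengo.Connection(memory, memory, synapse=tau)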

As for your questions in particular:

Can we leverage the fact that the data is time-dependent by introducing it chronologically across the input timesteps? I would usually say that just presenting all the data at once would be better, but in this specific case, would formatting the input so that it is introduced chronologically be beneficial to the SNN? Any ideas on how to do this?

Yes! It can be beneficial to leverage temporal data in this way. One example of how to do this is the Legendre Memory Unit (LMU) network (see here for an example of how to implement an LMU in NengoLoihi). The comparison you made between a time series and a static input is like the comparison between the MNIST and Sequential MNIST tasks. In the MNIST task, the network is presented with all of the pixels at once, whereas in the Sequential MNIST task, the pixels are flattened into a sequence of individual pixels that are presented to the network one at a time (thus requiring the network to process temporal data to complete the task).

Compared to a network designed to solve the MNIST task, an SNN (with an LMU) designed to solve the Sequential MNIST task can be much smaller, since the network can leverage the recurrence in the memory unit to “reuse” neural resources. Having “memory” in a neural network (as opposed to a feed-forward network) also allows it to use past information during training. This lets the network capture trends in the input data that are only apparent over long timescales, something feed-forward networks are hard pressed to replicate.
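
To make the data-formatting difference concrete, here is a rough NumPy sketch (all shapes and names are hypothetical, chosen only for illustration) contrasting the tile-the-frame approach from the MNIST example with streaming a time series one sample per timestep:

import numpy as np

# Hypothetical shapes, for illustration only
n_examples, seq_len, n_features = 32, 100, 1
dt = 0.001
presentation_time = 0.1

# Static input (MNIST-style): tile one frame across every timestep
static_data = np.random.rand(n_examples, n_features)
n_steps = int(presentation_time / dt)
repeated = np.tile(static_data[:, None, :], (1, n_steps, 1))
# repeated.shape == (n_examples, n_steps, n_features)

# Time-series input (Sequential-MNIST-style): the middle axis already
# carries the chronology, so each timestep gets the next sample
time_series = np.random.rand(n_examples, seq_len, n_features)
# feed time_series to the network directly; no tiling is needed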

There is, however, a caveat to processing the data this way. While it saves “space”, it may take longer to process the data. With the MNIST network, the time it takes to process an image is only the time it takes for information to propagate through all of the layers. With the Sequential MNIST network, however, the time it takes to process an “image” is the time needed to stream the entire sequence of pixels to the network, in addition to the time it takes for information to propagate through the network.
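
As a rough back-of-the-envelope illustration (the numbers here are assumptions, not measurements from either example):

dt = 0.001  # simulation timestep in seconds (assumed)

# Static MNIST: the whole image is presented at once for a fixed time
presentation_time = 0.1
static_steps = int(presentation_time / dt)  # 100 timesteps per image

# Sequential MNIST: 28 x 28 = 784 pixels streamed one per timestep,
# before counting the time for activity to propagate through the network
sequential_steps = 28 * 28  # 784 timesteps per image

print(static_steps, sequential_steps)  # 100 vs. 784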

You can find an example implementation of a Sequential MNIST LMU model here. I should note that this is a NengoDL model, so some work will be needed to convert it into a NengoLoihi network.