I am working on an implementation of SNNs (on Nengo Loihi) with real time-series data. Going over some of the examples, I was wondering if there is a way to leverage the fact that my data is actually time dependent.
For example: https://www.nengo.ai/nengo-loihi/examples/mnist-convnet.html. In this example, one of the cells contains a comment and code on repeating the input data over time, since spiking neurons are inherently time-dependent:
```python
# for the test data evaluation we'll be running the network over time
# using spiking neurons, so we need to repeat the input/target data
# for a number of timesteps (based on the presentation_time)
n_steps = int(presentation_time / dt)
test_images = np.tile(test_images[:minibatch_size*2, None, :], (1, n_steps, 1))
test_labels = np.tile(test_labels[:minibatch_size*2, None, None], (1, n_steps, 1))
```
This makes sense for MNIST, since the images are not time-dependent in any way: the best you can do is repeat the same input for a set number of timesteps, and basically every example presents its data this way. With actual time-series data, though, I am wondering if there is a better approach. Can we leverage the temporal structure of the data by introducing it chronologically across the input timesteps, rather than tiling a single sample? Normally I would just present all the data at once, but in this specific case, would formatting the input so that it unfolds chronologically be beneficial to the SNN? Any ideas on how to do this?
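To make the idea concrete, here is a rough sketch of what I have in mind (the array shapes and window length are just placeholders, not from the tutorial): instead of `np.tile`-ing one sample across `n_steps`, slice the series into consecutive windows so that each simulator timestep sees the next chronological sample:

```python
import numpy as np

# hypothetical time series: 1000 samples, 8 features each
data = np.random.rand(1000, 8)

# assumed presentation length in simulator timesteps (one sample per dt)
n_steps = 100

# instead of repeating a single sample (np.tile), cut the series into
# consecutive non-overlapping windows; each window is then presented
# chronologically over n_steps, giving the usual (batch, steps, features)
n_windows = data.shape[0] // n_steps
windows = data[: n_windows * n_steps].reshape(n_windows, n_steps, -1)

print(windows.shape)  # (10, 100, 8)
```

The resulting array has the same `(batch, n_steps, features)` layout the tutorial feeds to the simulator, except that the `n_steps` axis now carries successive samples instead of copies of one sample.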
Thanks so much for the help!