Hey everyone,

In the example (Converting a Keras model to a spiking neural network — NengoDL 3.6.1.dev0 docs), training and predicting on images seems straightforward. However, if I switch to time-based sensor signals (where each example is 2 seconds of data for the class to predict), things become more complex. In that case I was thinking about a real-time learning process. For example, if my input signal can be a sine wave, a square wave, or a triangular wave, how should I proceed to convert the ANN to an SNN and predict these three classes over time in NengoDL?
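To make the setup concrete, here is a minimal sketch of how I would generate such a dataset, using only NumPy (the function name, frequencies, and dataset sizes are my own assumptions, not from the docs). The idea is to produce labeled sine/square/triangle examples already shaped the way NengoDL expects time-series input, (batch_size, n_steps, features):

```python
import numpy as np

def make_waves(n_per_class=100, n_steps=2000, dt=0.001, freq=2.0, seed=0):
    # Hypothetical helper: 2 s of data per example at dt = 0.001 s.
    rng = np.random.default_rng(seed)
    t = np.arange(n_steps) * dt
    signals, labels = [], []
    for label in range(3):
        for _ in range(n_per_class):
            phase = rng.uniform(0, 2 * np.pi)
            theta = 2 * np.pi * freq * t + phase
            if label == 0:                      # sine wave
                x = np.sin(theta)
            elif label == 1:                    # square wave
                x = np.sign(np.sin(theta))
            else:                               # triangle wave
                x = 2 / np.pi * np.arcsin(np.sin(theta))
            signals.append(x)
            labels.append(label)
    # NengoDL expects inputs shaped (batch_size, n_steps, features)
    X = np.array(signals)[:, :, None]
    y = np.array(labels)
    return X, y

X, y = make_waves()
print(X.shape)  # (300, 2000, 1)
```

A dataset like this could then be fed to an ANN trained in Keras and converted with `nengo_dl.Converter`, but I am not sure that is the right approach for signals over time, hence the question.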

Why might prediction accuracy be affected? Do I need to present each data point for more than a single timestep? For instance, if I need 10 timesteps (or possibly more) to accurately predict each data point, how does that impact the prediction process? Let me break it down: if each timestep is 0.001 seconds (sim.dt), and I need 10 timesteps per data point, then for 2 seconds of data (2000 data points) the calculation becomes 10 timesteps per point × 2000 data points × 0.001 seconds per timestep = 20 seconds of simulation time to predict 2 seconds of data. So the time needed for prediction increases dramatically with this approach. Is that correct?
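Just to double-check my arithmetic, this is the calculation I mean (the variable names are mine):

```python
dt = 0.001               # sim.dt, seconds per simulation timestep
steps_per_point = 10     # timesteps each data point is presented for
n_points = 2000          # data points in 2 seconds of signal at 1 kHz

total_steps = steps_per_point * n_points   # 20000 simulation timesteps
sim_time = total_steps * dt                # ≈ 20 seconds of simulated time
print(total_steps, sim_time)
```

So presenting each point for 10 timesteps inflates 2 seconds of real data into about 20 seconds of simulation, a 10× blow-up, if I understand correctly.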

Another question: if I have, say, 4 channels of signals as input, how do I correctly construct the input array with shape (batch_size x n_steps x features)?
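Here is what I have in mind, as a NumPy sketch (the random data and sizes are placeholders): stacking the four per-channel recordings along the last axis so that features = 4.

```python
import numpy as np

batch_size, n_steps, n_channels = 32, 2000, 4

# Placeholder data: four separate channel recordings,
# each shaped (batch_size, n_steps).
channels = [np.random.randn(batch_size, n_steps) for _ in range(n_channels)]

# Stack along a new last axis -> (batch_size, n_steps, features).
X = np.stack(channels, axis=-1)
print(X.shape)  # (32, 2000, 4)
```

Is this the right way to arrange multi-channel data, i.e. with each channel as one entry of the features dimension?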

Thank you so much.