I am building an SNN model using Nengo to perform time-series forecasting. My main motivation is the energy efficiency of SNNs: I aim to achieve a decent level of accuracy with my SNN model compared to a more traditional ANN on the very same task (which could be a 1D/2D CNN or an LSTM).

My research doesn’t involve hardware, since it’s a bit out of scope given my current goal and study.

It would be very nice if I could prove that my SNN model, built using NengoDL or KerasSpiking, consumes (much) less energy than a regular ANN implementation of comparable accuracy. Otherwise, I'd be relying on the sole assumption that SNNs are more efficient because not every neuron fires, so the energy consumed is by definition less than or equal to that of a regular ANN with the same properties (number of neurons, etc.). What do you think about this last statement? Is it enough, or should a more rigorous proof be provided?
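To make that assumption concrete, here is the back-of-envelope arithmetic I have in mind for a single dense layer. The layer sizes, number of timesteps, and firing probability below are made up for illustration, not taken from my actual model:

```python
# Rough synaptic-operation count for one dense layer:
# an ANN activates every connection at every timestep,
# while an SNN only activates connections whose presynaptic
# neuron actually spikes.

def ann_synops(n_in, n_out, timesteps):
    """Every input contributes to every output at every timestep."""
    return n_in * n_out * timesteps

def snn_synops(n_in, n_out, timesteps, spike_prob):
    """Only spiking inputs trigger synaptic operations (expected value)."""
    return n_in * spike_prob * n_out * timesteps

# Illustrative numbers: 128 inputs, 64 outputs, 100 timesteps,
# 10% of neurons firing per timestep.
dense_ops = ann_synops(128, 64, 100)
sparse_ops = snn_synops(128, 64, 100, spike_prob=0.1)

print(dense_ops)                 # 819200
print(sparse_ops)                # 81920.0
print(sparse_ops / dense_ops)    # 0.1 -> equals the firing probability
```

Under this counting, the SNN's operation count is the ANN's scaled by the firing probability, so it can never exceed the ANN's for the same architecture, which is the "less than or equal" claim above.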

Regarding proving this more empirically, I am very interested in what is described in this example: Estimating model energy. My only concern is that this example focuses mostly on spiking-hardware-based implementations. Do you think this approach could be relevant for measuring the energy consumption of two models (SNN and non-SNN) without considering any spiking hardware?
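In other words, my understanding of that style of estimate is roughly the sketch below: multiply operation counts by a device-specific energy-per-operation constant. The joules-per-synop values here are placeholders I made up, not published hardware figures, and the device names are just dictionary keys, not a real API:

```python
# Toy energy estimate in the spirit of an "energy per synaptic
# operation" model. The constants are ILLUSTRATIVE placeholders,
# not measured or published figures.

ENERGY_PER_SYNOP = {
    "cpu": 1.0e-9,        # joules per synaptic op (placeholder)
    "neuromorphic": 1.0e-11,  # joules per synaptic op (placeholder)
}

def estimate_energy(synops, device):
    """Total energy = operation count x per-operation cost."""
    return synops * ENERGY_PER_SYNOP[device]

ann_ops, snn_ops = 819_200, 81_920  # counts from the sketch above

for device in ENERGY_PER_SYNOP:
    print(device, estimate_energy(ann_ops, device),
          estimate_energy(snn_ops, device))
```

What I am unsure about is whether such an estimate is meaningful when both models run on the same conventional hardware, since the per-op constants are where the hardware assumptions enter.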