Energy consumption / calculation

Hello nengo community,

Since the main advantage of using a spiking neural network is energy efficiency, I was wondering: is there any way in Nengo to calculate the energy consumed by a NengoDL model? Take the example of MNIST: two identical models, one non-spiking (e.g., in Keras) and one spiking. How can we say that the spiking version consumes less energy? Is there any way to calculate this energy consumption without using any hardware?

Sorry if this question has already been answered in another thread; I couldn't find a relevant one in the forum.

Thank you in advance for your answer.

Hello @Choozi, you are in luck! Please check out KerasSpiking and its related examples for energy estimation.


@zerone thank you very much for the links. :slight_smile: I will look into them and get back to you if I have any questions.

@zerone Thank you for sharing the interesting link.

However, this has been done for KerasSpiking, and I am using NengoDL. Can we do something similar in NengoDL? For instance, taking the MNIST tutorial (Optimizing a spiking neural network — NengoDL 3.4.1.dev0 docs) with NengoDL using LIF neurons, I would like to estimate the energy consumption for that model.

Just to chime in: the energy estimates are only available for KerasSpiking at the moment. They are not built into NengoLoihi, Nengo, or NengoDL as an automated process. However, with NengoLoihi you can measure the power consumption using a physical Loihi board (NengoLoihi provides an interface to the nxsdk board object in order to do so).

Estimating the energy consumption for a specific Nengo model requires some effort to program, but isn't overly complex. If you look at the KerasSpiking energy estimation code, we multiply the number of operations performed per second by an estimate of the amount of energy each operation consumes. This is done for the synapses (connections) and for the neurons. The difficult part of computing the energy estimate is collecting the statistics of the network. For simple networks, you can probably probe every ensemble and connection in the entire network for the duration of the simulation run, but this becomes expensive (in terms of computer memory) for larger models.
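To make the "operations per second × energy per operation" idea concrete, here is a minimal, self-contained sketch of such an estimate. The energy constants and the function itself are illustrative placeholders I made up for this post, not values from KerasSpiking or any real device:

```python
# Hypothetical sketch of the estimation approach described above:
# (number of operations) x (energy per operation), computed separately
# for synaptic ops and neuron updates. The constants are assumed
# placeholders, not measured hardware values.

ENERGY_PER_SYNOP = 2.0e-11   # joules per synaptic operation (assumed)
ENERGY_PER_NEURON = 8.0e-11  # joules per neuron state update (assumed)

def estimate_energy(n_neurons, n_synapses, mean_firing_rate, dt, sim_time):
    """Estimate energy (in joules) for one simulation run.

    On spiking hardware, a synapse only does work when its presynaptic
    neuron spikes, so synaptic ops scale with the firing rate; neuron
    state updates happen every timestep regardless.
    """
    n_steps = sim_time / dt
    # synaptic ops: each spike triggers one op per outgoing synapse
    synop_count = n_synapses * mean_firing_rate * sim_time
    # neuron updates: one per neuron per timestep
    neuron_update_count = n_neurons * n_steps
    return (synop_count * ENERGY_PER_SYNOP
            + neuron_update_count * ENERGY_PER_NEURON)

# e.g. 1000 neurons, 100k synapses, 20 Hz average rate, 1 s of simulation
print(estimate_energy(1000, 100_000, 20.0, dt=0.001, sim_time=1.0))
```

In a real Nengo model, `mean_firing_rate` is the statistic you would gather from probes, which is exactly the expensive part mentioned above.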

Thanks @xchoo for the insights.

@Choozi, I haven't used KerasSpiking myself. But I would guess that for any TF model you are building (with TF APIs, for subsequent conversion with NengoDL), you can just take a copy of it and call the keras_spiking APIs, like below:

# estimate model energy
energy = keras_spiking.ModelEnergy(model)

where model is your tf.keras.Model() object. As you can see in the example, ReLU neurons were used to build the TF model and then estimate its energy; I believe the estimates with LIF neurons would be similar, unless @xchoo thinks otherwise.

I haven't tested it myself, but it seems entirely plausible that a TF model (or even a TF model created within NengoDL) would work with the keras_spiking.ModelEnergy() function. It's definitely a quick and simple suggestion to try. @Choozi, keep us informed! :smiley:


@zerone @xchoo thank you for your comments/suggestions. Yes, KerasSpiking only accepts TF models. But what confuses me is that the energy estimates for a trained model and an un-trained model are the same. Shouldn't the estimate change with training, where the weights get updated?

Hello @Choozi, I am no expert on energy estimation, but I doubt that whether the model is trained or un-trained would affect the energy estimate in any sense. Please note that it's energy estimation, not actual energy profiling (i.e., actual measurement of energy consumption by executing the model on a physical board).

Since it's just an estimation, I believe KerasSpiking simply estimates the probable number of neuron activation/spiking operations executed on the board (irrespective of whether the model is trained) and multiplies it by the energy consumed by one such op. Had you actually executed your trained and un-trained models on a physical board, the energy difference might have been noticeable.

No, the energy estimates for the trained and un-trained models shouldn't differ by much, even if the un-trained model is initialized with weight matrices of all 0's. Generally, for all of the hardware that KerasSpiking estimates energy for, multiplication by zero is not skipped; it is actually computed (i.e., by literally multiplying by 0), so energy is still used to perform the multiplication.

One thing that might impact the energy use is the number of spikes being communicated between various parts of the system. If the network is configured in such a way that it produces no spikes pre-training and some spikes post-training, then it is entirely plausible that you will see an energy difference between the pre- and post-trained networks. However, in NengoDL, most of the layers are initialized with random weights and random neuron parameters, so a starting network with 0 spikes is typically not the case. Thus, on average, you'll probably not see much difference between the pre- and post-trained networks.
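A toy calculation may help separate the two points above. With assumed (made-up) per-op energy constants, an MAC-count-based estimate ignores the weight values entirely, while a spike-based estimate does respond to changes in firing rate:

```python
# Toy numbers for the two points above. Both energy constants are
# assumed placeholders, not values from KerasSpiking or real hardware.

ENERGY_PER_MAC = 1.0e-11    # joules per multiply-accumulate (assumed)
ENERGY_PER_SYNOP = 2.0e-11  # joules per synaptic op on spiking hw (assumed)

n_weights = 784 * 128  # one dense layer of an MNIST-sized model

# Non-spiking estimate: one MAC per weight, whether the weight is a
# trained value or a 0 -- so training does not change this number.
mac_energy = n_weights * ENERGY_PER_MAC

def spiking_energy(mean_rate_hz, sim_time=1.0):
    # Spiking estimate: synaptic ops scale with the firing rate, so a
    # network that spikes more (or less) after training costs a
    # different amount.
    return n_weights * mean_rate_hz * sim_time * ENERGY_PER_SYNOP

print(mac_energy)            # identical for zeroed or trained weights
print(spiking_energy(0.0))   # a silent network incurs no synaptic ops
print(spiking_energy(20.0))  # spiking at 20 Hz incurs a nonzero cost
```

So if training changes the network's average firing rate, a spike-based estimate will reflect it, even though the weight values themselves don't enter the calculation.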