Nengo_spinnaker external device support

Hello all,

I am trying to implement image segmentation on a SpiNNaker machine (SpiNN-3 board). The idea is to convert a working CNN to an SNN using Nengo. Reading the documentation and forum discussions has clarified a lot, but there is still some confusion that I hope you can help me with.

  1. I read about a similar case here (Using Nengo_DL together with Nengo_Spinnaker) and I understand that nengo_spinnaker is no longer maintained. My question is whether the older versions of nengo and nengo_dl that it works with support ANN-to-SNN conversion. And in general, how can I find out what functionality I would lose by working with these older versions?

  2. Since the idea is to connect a DVS to the board via SpiNNlink, I was wondering whether nengo has some API support for external devices. I only see functions for simulating a model on SpiNNaker.

  3. If I use the latest nengo, nengo_dl, and tensorflow, and manage to convert my CNN to an SNN that runs on a PC, would I somehow be able to retrieve the generated SNN's architecture, weights, and other configuration so that I can implement it manually on SpiNNaker using the tools that do work with it (e.g., an older Nengo core)?

  4. Reading the paper “Training Spiking Deep Networks for Neuromorphic Hardware”, I see that there are some constraints on which ANNs can actually be converted to SNNs. Is there anything else that I need to bear in mind during this conversion?

Thanks for your help,
Vaggelis

Hi @ntouev, and welcome to the Nengo forums. :smiley:

To answer your questions:

Hmmm. Looking at the documentation, probably not. If you look at the NengoDL release notes, you’ll see that the NengoDL converter (which you need to convert the ANN to an SNN) was introduced in NengoDL v3.0.0. This in turn requires Nengo v3.0.0, and NengoSpinnaker only supports up to Nengo v2.2.0. :frowning:
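
A quick way to see what you actually have in a given environment is just to inspect the installed packages (a trivial sketch, nothing more):

```python
import nengo
import nengo_dl

print(nengo.__version__, nengo_dl.__version__)
# nengo_dl.Converter (the ANN-to-SNN converter) only exists in NengoDL >= 3.0.0,
# which requires Nengo >= 3.0.0, whereas NengoSpinnaker tops out at Nengo v2.2.0.
print(hasattr(nengo_dl, "Converter"))
```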

This depends entirely on the backend being used. In this case, the NengoSpinnaker backend would need to be modified to support connecting a DVS to the board, and it would also have to take on the job of “translating” or “converting” a Nengo object (perhaps one tagged with some metadata) into the DVS connection. As far as I know, however, this feature does not exist in the NengoSpinnaker backend.

This should be possible in theory. You should be able to obtain all of the necessary parameters (connection weights, biases, gains, etc.) from the converted SNN, and if you have a “standard” Nengo model that just loads these weights and parameters, it should work even in the older versions of Nengo core.
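
Something along these lines should get you the parameters (a minimal sketch; `keras_model` is a placeholder for your trained CNN, and it assumes every layer is natively convertible so the network builds under Nengo core):

```python
import nengo
import nengo_dl

# Sketch only: `keras_model` is a placeholder for your trained Keras CNN.
# allow_fallback=False makes the conversion fail loudly if any layer would
# need a TensorNode, which Nengo core (and NengoSpinnaker) cannot build.
converter = nengo_dl.Converter(keras_model, allow_fallback=False)
net = converter.net  # a plain Nengo network with the Keras weights baked in

# Build with Nengo core and read out the built parameters.
with nengo.Simulator(net) as sim:
    weights = {conn: sim.data[conn].weights for conn in net.all_connections}
    gains = {ens: sim.data[ens].gain for ens in net.all_ensembles}
    biases = {ens: sim.data[ens].bias for ens in net.all_ensembles}

# These arrays can then be fed into a hand-written Nengo 2.x model
# (e.g., via the `transform` argument of nengo.Connection) for NengoSpinnaker.
```

If you fine-tune the converted network in NengoDL first, call `sim.freeze_params(net)` inside the `nengo_dl.Simulator` context before extracting, so that the trained values end up in the network.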

I haven’t done this enough to provide much feedback, but from my experience the firing rates of the neurons are important to keep in mind. You’ll need firing rates high enough that the network performance doesn’t suffer (for SNNs, information is only transmitted in spikes, so fewer spikes mean less information), but not so high that the hardware is overwhelmed. Another consideration is the neuron types used on the hardware. Some hardware uses modified neuron models (i.e., not exactly the same as the ones in Nengo) that may need to be taken into account during the training process.
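
As an illustration, both of those concerns are typically handled through the converter arguments (a minimal sketch; `keras_model` and the specific numbers are placeholders, not recommendations):

```python
import nengo
import nengo_dl
import tensorflow as tf

# Sketch only: `keras_model` is a placeholder for your trained CNN.
converter = nengo_dl.Converter(
    keras_model,
    # Swap ReLUs for a spiking neuron model that (roughly) matches
    # what the target hardware implements.
    swap_activations={tf.nn.relu: nengo.SpikingRectifiedLinear()},
    # Higher rates -> better accuracy, but more spike traffic on the board.
    scale_firing_rates=200,
    # Synaptic filtering smooths the spike trains between layers.
    synapse=0.01,
)
```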

Thanks a lot for the detailed response!