As I didn’t find any examples of how to work with event-based data in Nengo (and in particular with Nengo-Loihi), I decided to open a new post to collect suggestions as well as my thoughts about this topic.
I am working on some DVS datasets and I started from the Keras-to-Loihi example. I have several questions about it:
How to represent event data in Nengo?
In the Keras-to-Loihi example, we use images which do not have a temporal dimension, so the training images have shape `(batch_size, 1, data_size)`, where in the example `data_size` is `28 * 28 * 1`, while the labels have shape `(batch_size, 1, 1)`. The input shape of the model is `(28, 28, 1)`.
The event data I am using has shape `(128, 128, 2)` (in my case I have two channels for polarity), and for each sample I consider 200 frames / bins, obtained by binning the events with a window of a given length (e.g. 100 events per bin, or alternatively all events within a 10 ms time window). So each frame has shape `(128, 128, 2)`, and in a sense we can see each sample as a 4D tensor of shape `(200, 128, 128, 2)`.
Considering the `(batch_size, timesteps, data_size)` representation used by Nengo, my idea is to create batches of shape `(batch_size, 200, 128 * 128 * 2)` for training my network.
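For concreteness, this is roughly how I would build such batches (placeholder data and a small hypothetical batch size; NumPy only):

```python
import numpy as np

# Hypothetical binned DVS data: a batch of samples, each with 200 frames
# of shape (128, 128, 2) (two polarity channels).
batch_size, n_steps = 2, 200
frames = np.zeros((batch_size, n_steps, 128, 128, 2), dtype=np.float32)

# Flatten each frame so the array matches the (batch_size, timesteps,
# data_size) layout that NengoDL expects for training data.
train_x = frames.reshape(batch_size, n_steps, 128 * 128 * 2)
print(train_x.shape)  # (2, 200, 32768)
```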
Is this approach correct? What other modifications do I need to make to the Keras-to-Loihi example to use event data? Do you have any further suggestions?
Moreover, I saw the `temporal_model` in the nengo-dl source code. Could you please explain whether I need to use it, and if so, how?
Furthermore, with the above representation, can I also use a `synapse` during training?
I also thought of two other ways to represent event data:
- merge the time steps / bins into the batch dimension: in this case batches will have shape `(batch_size * 200, 1, 128 * 128 * 2)` and the network input shape will remain `(128, 128, 2)`.
- merge the time steps / bins into the channel dimension: in this case batches will have shape `(batch_size, 1, 128 * 128 * 2 * 200)`, and the network input shape will be `(128, 128, 2 * 200)`.
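For reference, both reshapes could be sketched like this (placeholder data; the ordering of time bins and polarity channels in the second option is just one possible choice):

```python
import numpy as np

# Hypothetical binned DVS data: (batch, time bins, height, width, polarity).
batch_size, n_steps = 2, 200
frames = np.zeros((batch_size, n_steps, 128, 128, 2), dtype=np.float32)

# Option 1: fold the time bins into the batch dimension; each frame
# becomes an independent sample with a singleton time axis.
x_batch = frames.reshape(batch_size * n_steps, 1, 128 * 128 * 2)

# Option 2: fold the time bins into the channel dimension; move the time
# axis next to the polarity axis so each pixel carries all 200 * 2 values.
x_chan = frames.transpose(0, 2, 3, 1, 4).reshape(
    batch_size, 1, 128 * 128 * 2 * 200
)
```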
In both cases the time dimension is `1`, so each input sample has to be replicated for some number of time steps during evaluation of the network, as done in the Keras-to-Loihi example. However, replicating each sample this way may not be very efficient. How should I deal with this?
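The replication I mean is the tiling done in the Keras-to-Loihi example, roughly (placeholder data, hypothetical number of presentation steps):

```python
import numpy as np

# Each flattened test sample has a singleton time axis; repeat it along
# that axis so the network sees the same frame for several timesteps.
n_presentation = 30  # hypothetical number of timesteps per sample
test_x = np.zeros((4, 1, 128 * 128 * 2), dtype=np.float32)
tiled = np.tile(test_x, (1, n_presentation, 1))
print(tiled.shape)  # (4, 30, 32768)
```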
By the way, which do you think is the best way to represent event data?
Should I run the first layer off-chip?
In the Keras-to-Loihi example, the first layer is run off-chip to convert pixels into spikes. Should I do the same when working with event data?
How to present input samples to Loihi?
The Keras-to-Loihi example uses the `PresentInput` object to feed test samples into the model. Can I use it with event data? And if so, what is the best way to use it with the event data representations described above?
I ran some experiments with it, sending the events one by one, and noticed that it is pretty slow. Do you agree?
Moreover, I am also wondering whether there is a better way to feed input data to the SNN when working with Loihi.
I also saw that in the dvs087 branch of nengo-loihi you added some code to work with DVS files / cameras. Does it work with the latest (development) versions of nengo-loihi and NxSDK?
I think it would be very cool if you could integrate it into the latest release and add some examples for it.
Finally, I would like to ask whether you could add some code examples for Nengo-DL/Loihi showing how to work with event-based cameras / data, since I believe this would be very useful for other users too.