Working with event data in Nengo and Loihi

Hi all,
since I didn’t find any examples of how to work with event-based data in Nengo (and in particular in Nengo-Loihi), I decided to open a new post to collect suggestions as well as my thoughts on this topic.
I am working on some DVS datasets and I started from the Keras-to-Loihi example. I have several questions about it:

  1. How to represent event data in Nengo?
    In the Keras-to-Loihi example, the inputs are images with no temporal dimension, so the training images have shape (batch_size, 1, data_size), where data_size is 28 * 28 * 1, and the labels have shape (batch_size, 1, 1). The input shape of the model is (28, 28, 1).
    The event data I am using has shape (128, 128, 2) (in my case I have two channels for polarity), and for each sample I consider 200 frames / bins, obtained by binning the events with a window of a given length (e.g. 100 events per bin, or alternatively all events within a 10 ms time window).
    So, each frame has shape (128, 128, 2) and in a sense we can see each sample as a 4D tensor of shape (200, 128, 128, 2).
    Considering the representation used by Nengo-DL (batch_size, timesteps, data_size), my idea is to create batches of shape (batch_size, 200, 128 * 128 * 2) for training my network (see the rough sketch after this list).
    Is this approach correct? What other modifications do I need to make to the Keras-to-Loihi example to use event data? Do you have any further suggestions?
    Moreover, I saw the temporal_model parameter in the source code of the nengo-dl Converter. Could you please explain whether I need to use it, and how?
    Furthermore, with the above representation, can I also use scale_firing_rates and synapse during training?
    I also thought of two other ways to represent event data:
  • merge the time steps / bins into the batch dimension: in this case batches will have shape (batch_size * 200, 1, 128 * 128 * 2) and the network input shape will remain the same (128, 128, 2).
  • merge the time steps / bins into the channel dimension: in this case batches will have shape (batch_size, 1, 128 * 128 * 2 * 200), and the network input shape will be (128, 128, 2 * 200).
    In both cases the time dimension is 1, so each input sample needs to be replicated for some number of time steps when evaluating the network, as done in the Keras-to-Loihi example. However, replicating each sample in this way may not be very efficient.
    How should I deal with this?
    By the way, what do you think is the best way to represent event data?
  2. Should I run the first layer off-chip?
    In the Keras-to-Loihi example, the first layer is run off-chip to convert pixels into spikes. Should I do the same when working with event data?

  3. How to present input samples to Loihi?
    The Keras-to-Loihi example makes use of the PresentInput object to feed test samples into the model. Can I use it with event data (or better, what is the best way to use it with respect to the above event data representations)?
    I made some experiments with it, sending the events one by one, and noticed that it is pretty slow. Do you agree?
    Moreover, I am also wondering whether there is a better way to feed input data to the SNN when working with Loihi.
    I also saw that in the dvs087 branch of nengo-loihi you added some code to work with DVS files / cameras. Does it work with the latest (development) version of nengo-loihi and NxSDK?
    I think it would be very cool if you could integrate it into the latest release and add some examples for it.
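
To make the representation in question 1 more concrete, here is a rough, untested sketch of what I have in mind (the event array format, the dummy data, and the tiling of the labels are just my assumptions):

```python
import numpy as np

n_steps = 200                         # bins per sample
height, width, polarities = 128, 128, 2

# Dummy data: a list with one sample; each sample is an (n_events, 4) int
# array with columns (timestamp, x, y, polarity).
samples = [np.array([[0, 5, 7, 1], [3, 100, 20, 0]])]

def sample_to_frames(sample_events, n_steps=n_steps):
    """Bin one sample's events into n_steps frames of shape (128, 128, 2)."""
    frames = np.zeros((n_steps, height, width, polarities))
    t_max = sample_events[:, 0].max() + 1
    for t, x, y, p in sample_events:
        step = int(n_steps * t / t_max)  # which time bin this event falls into
        frames[step, y, x, p] += 1
    return frames

# Flatten each frame and stack the samples, giving the
# (batch_size, timesteps, data_size) layout used for training:
train_x = np.stack([sample_to_frames(s).reshape(n_steps, -1) for s in samples])
# train_x.shape == (batch_size, 200, 128 * 128 * 2)
# I assume the labels would then be tiled along the time axis,
# e.g. train_y.shape == (batch_size, 200, 1).
```

For evaluation, I guess these flattened frames could also be passed to nengo.processes.PresentInput with a presentation_time of one timestep, but that relates to question 3.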

Finally, I would like to ask whether you could add some code examples for Nengo-DL/Loihi showing how to work with event-based cameras / data, since I believe this would be very useful for other users too.

Best.

We actually did considerable work on interfacing a DVS (both from file and from a live camera) with Loihi; unfortunately, the project got cut and all the work was never merged into the master branch of NengoLoihi. This is what you found in the dvs087 branch. It currently does not work with the latest version of NengoLoihi. I’ll put this on the list to discuss at our project review meeting next week, to see if it’s something we can do sooner rather than later (it’s currently on our TODO list, but fairly far down).

If you’re interested in trying to get the code working yourself, the best place to start is probably this commit, which creates a Node to read data from a .aedat file and send it to the board. You can see here that the special builder for this node creates a SpikeInput and adds the spikes to it directly from the file, which is the most efficient way to get spikes to the board.

You could also make your own solution by having a Node that outputs the correct “frame” each timestep. So in your node function, you’d use the input time t to find all pixels that should be spiking at that time, and put them together into a “frame” (image) that the node can return. This is essentially what we do here. You can then connect that node to an Ensemble, and set things up so a neuron in the ensemble will spike any time one of those input pixels is “on”. This would be less performant than the solution we implemented, though, because in our solution, we avoid this intermediate step of turning events into frames, only to turn them back into events.
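
For concreteness, a minimal sketch of that node-based approach could look something like the code below. It is untested, and the event array format, the neuron parameters, and the 1/dt scaling are placeholders you would have to adapt to your data.

```python
import numpy as np
import nengo

dt = 0.001  # simulator timestep
height, width, polarities = 180, 240, 2

# Placeholder: an (n_events, 4) array of (t_in_seconds, x, y, polarity),
# e.g. loaded from an .aedat / .events file ahead of time.
events = np.zeros((0, 4))

def dvs_frame(t):
    # Find all events that fall within the current timestep and
    # accumulate them into a frame, returned as a flat vector.
    in_step = events[(events[:, 0] >= t - dt) & (events[:, 0] < t)]
    frame = np.zeros((height, width, polarities))
    for _, x, y, p in in_step:
        frame[int(y), int(x), int(p)] += 1
    return frame.ravel()

with nengo.Network() as net:
    u = nengo.Node(dvs_frame, size_out=height * width * polarities)
    ens = nengo.Ensemble(
        n_neurons=height * width * polarities,
        dimensions=1,
        neuron_type=nengo.SpikingRectifiedLinear(),
        gain=nengo.dists.Choice([1.0]),
        bias=nengo.dists.Choice([0.0]),
    )
    # One neuron per pixel: the 1 / dt scaling makes a pixel with one event
    # in a timestep drive roughly one spike; this will need tuning.
    nengo.Connection(u, ens.neurons, transform=1.0 / dt, synapse=None)
```

Keep in mind that a Node like this always runs off-chip, so you would still be sending data from the host to the chip during the run, which is another reason the SpikeInput approach above is more efficient.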

Hi @Eric,
thank you very much for your feedback and suggestions.
Currently my first goal is to better understand how to train a CNN model on event data using Nengo-DL. By the way, do you have suggestions about how to build a temporal model, convert it using nengo-dl.Converter, and deploy it to Loihi?
Then I plan to experiment on the code in the dvs087 branch.

I will keep you updated about my work.
In the meantime, feel free to add further notes, suggestions, or code snippets about the other questions in my first post, if you can.

Best regards and thanks again for your reply.

Neither Keras nor NengoDL supports event-based processing. So for either of those platforms, you’ve got to turn the events into a “frame” that you pass as your input image. See the code in DVSFileChipNode.update for an outline of how to turn spikes into a frame. If you combine that with the spike loading method from that commit, you should be able to get training in NengoDL fairly quickly. If you want to train in Keras/TensorFlow, you could either adapt that code for TensorFlow, or just keep making the frames in Numpy and feed them in that way. (I’m not sure which would be more efficient. The advantage of doing it in TensorFlow is there should be less data to transfer to the GPU, assuming your event data is reasonably sparse and the amount of event data per timestep is less than the amount of data needed to represent a frame. Then “assembling” the frame would be done on the GPU, and it might be able to take advantage of some parallelism there, though sparse data is typically harder for GPUs to handle efficiently. So I don’t know which would be better; you’d have to try them both. You could even represent the frame as a sparse matrix rather than a dense one, either in Numpy or in TensorFlow, but again I’m not sure if that would end up performing better or not.)
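
To illustrate the TensorFlow route, assembling a dense frame from sparse event coordinates could look roughly like this (untested, and the (y, x, polarity) index format is just an assumption about how you would store your events):

```python
import tensorflow as tf

height, width, polarities = 128, 128, 2

def events_to_frame(indices):
    """Assemble one dense frame from sparse event coordinates.

    `indices` is an (n_events, 3) int tensor of (y, x, polarity) for the
    events falling within one timestep.
    """
    ones = tf.ones_like(indices[:, 0], dtype=tf.float32)
    # scatter_nd sums duplicate indices, so repeated events accumulate
    frame = tf.scatter_nd(indices, ones, (height, width, polarities))
    return tf.reshape(frame, [-1])  # flatten to feed the network

# e.g. two events at pixel (y=3, x=7), one per polarity
frame = events_to_frame(tf.constant([[3, 7, 1], [3, 7, 0]]))
```

Whether that actually beats building the dense frames in Numpy and transferring them is the part you would have to benchmark.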

Hi @Eric,
any news about the NengoLoihi integration with DVS / event-based data?

We’ve moved it up our backlog, and should get around to it next week. I’ll let you know when it’s ready (and if you don’t hear anything by the end of next week, feel free to ask again).

Cool! Thank you very much in advance for your help and time.

Hi @spiker,

I’ve updated the file-based DVS code; it’s in a PR here. It hasn’t been reviewed or merged yet, so you’ll have to use that specific branch, and things may still change slightly. I have tested it though, and it should all work. There is an example notebook added as part of that PR that shows how to use the DVSFileChipNode. If you run into any problems, please post them in the PR, because we may be able to fix them before we merge it.

Hi @Eric,
thank you very much for your support and PR.
I will try the code in the example notebook in the next few days and keep you updated on my results.

Best.

Hi @Eric,
just some preliminary comments about your PR.

I tested the example notebook on both the Loihi hardware and simulator, and it works well (I didn’t find any issue).
Some considerations / suggestions about the DVS code:

  • As for the DVSFileChipNode, it assumes that the DVS event data has height = 180 and width = 240. I would suggest adding height and width parameters to the constructor so that users can set them according to the size of their event data. I can update the code and make a PR, if you agree.

  • I would also suggest adding another example that shows how to train a CNN using Nengo-DL on an event-based dataset (like N-MNIST), convert the model into an SNN, and run it on Loihi using the DVSFileChipNode. I think the Keras-to-Loihi example may be a good starting point (the idea would be to use N-MNIST instead of the MNIST dataset and DVSFileChipNode instead of PresentInput). I believe this would also be very useful for other users.

  • I plan to use the file-based DVS code in a multi-class classification problem. In particular, I would like to know whether it is possible (or better) to save all the events of the samples to classify (belonging to different classes) into a single file, or to save the events of each sample into a separate file and perform the inference one sample at a time. It would be great if you could give me some suggestions on how to use the DVSFileChipNode in a classification problem with multiple samples (each one consisting of several events).

By the way, great work and, of course, thanks for your support.

Best regards.

  1. There is definitely an argument for allowing other widths/heights. The reason we didn’t is that the vast majority of DVS/Davis sensors are 240 x 180, and the way that NxSDK does DVS input is specifically designed for this size. So you could do file-based input with another size, but I’m not sure if it would be possible to do live DVS with other sizes. That said, I would be OK to add configurable size for the DVSFileChipNode, and just note in the docstring that this wouldn’t work for live DVS.
  2. I agree, such an example sounds great. Unfortunately, I don’t have time to work on that in the near future, but I can add it to our backlog.
  3. The way the DVSFileChipNode is set up, it reads the whole file at the start and uses NxSDK tools to load the events onto the board. This means that it is not possible to e.g. just change the file that a DVSFileChipNode is reading from and re-run the simulation using a different file. So for now, your options are to either a) put all the samples sequentially in one file, or b) re-create the simulator each time using a different file for the node. The first option obviously takes a bit of legwork, though the nengo_loihi.dvs.DVSEvents class can help you here, because you can read a bunch of files, put them together into one DVSEvents, and then use the .write_file method to write a .events file. The second option is easier to implement, but you’ll spend extra time rebuilding the model and re-connecting to the board each time.
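
For option (a), the concatenation could look roughly like the sketch below. It is untested, and apart from write_file the DVSEvents method and attribute names here are from memory, so double-check them against dvs.py in the PR.

```python
import numpy as np
from nengo_loihi.dvs import DVSEvents

sample_files = ["sample0.events", "sample1.events"]  # one file per sample
gap_us = 100000  # silent gap between samples, in microseconds (arbitrary)

all_events = []
t_offset = 0
for path in sample_files:
    sample = DVSEvents.from_file(path)       # assumed loader; check the PR
    ev = np.array(sample.events, copy=True)  # structured array with a "t" field
    ev["t"] = ev["t"] + t_offset             # shift this sample after the previous one
    t_offset = int(ev["t"].max()) + gap_us
    all_events.append(ev)

combined = DVSEvents()
combined.events = np.concatenate(all_events)  # assumed attribute name
combined.write_file("all_samples.events")
```

You would then keep track of which timestep range corresponds to which sample, so you know which part of the output to read out for each classification.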