Scanline Image Encoding?

I was curious if anyone has looked into, or tried, incorporating scanline encoding for Nengo Loihi image classification inputs. While reading about Intel’s Loihi, I came across their method of scanline encoding for spiking image classification. Attached is a visual from their presentation. As best I can understand it, instead of flattening the 2D array and repeating it identically over time, pixel values from new scanlines are generated and fed in series. They seem to avoid using any convolutions, as well as the data redundancy that comes from repeating the same image at every timestep.
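To make the idea concrete, here is a minimal numpy sketch of my reading of it: sample pixel intensities along straight lines across the image and present one scanline per timestep. The line-placement scheme (random endpoints) and all parameter names are my own guesses, not Intel’s actual encoder.

```python
import numpy as np

def scanline_encode(image, n_lines=16, samples_per_line=32, rng=None):
    """Sample pixel values along straight lines across a 2D image.

    Each scanline yields one input vector; presenting the lines in
    series replaces repeating the flattened image at every timestep.
    (Hypothetical reconstruction, not Intel's actual encoder.)
    """
    rng = np.random.default_rng(rng)
    h, w = image.shape
    out = np.empty((n_lines, samples_per_line))
    for i in range(n_lines):
        # Pick two random endpoints and sample the image between them.
        x0, x1 = rng.uniform(0, w - 1, size=2)
        y0, y1 = rng.uniform(0, h - 1, size=2)
        t = np.linspace(0.0, 1.0, samples_per_line)
        xs = np.round(x0 + t * (x1 - x0)).astype(int)
        ys = np.round(y0 + t * (y1 - y0)).astype(int)
        out[i] = image[ys, xs]
    return out

# Toy 28x28 "image" with a bright square; 16 scanlines of 32 samples each
img = np.zeros((28, 28))
img[10:18, 10:18] = 1.0
lines = scanline_encode(img, n_lines=16, rng=0)
print(lines.shape)  # (16, 32)
```

With 16 lines of 32 samples, each timestep carries 32 values instead of the full 784-pixel frame, which is where the potential data-size savings would come from.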

Thank you!

Hi Jasper,

I’m not aware of anybody who’s tried that in Nengo.

Do you know if the scanline encoding is being done off-chip? That’s my impression from the figure: the scanline encoder is what turns the image into spikes, which are then sent to the chip for the output transform. If that’s the case, then the efficacy of it on Loihi would really depend on how efficient the scanline encoding is on the CPU. If it takes significant CPU processing, then it might be better to do a pixel-wise encoding into spikes. If it can be done very efficiently (which could be the case if the number of scanlines is much less than the number of pixels), then maybe there would be some advantage to this.

(Currently in Nengo Loihi, the whole process of encoding images to spikes happens off-chip, but there might be ways to do it on chip. For example, you could have on-chip neurons for which you set the biases based on the pixel intensities in your image. While the chip wasn’t designed for fast configuration of biases, with static images you would just have to change these biases when you change the input image, so it might not be too bad.)
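A rough numpy illustration of the bias idea above, using the steady-state LIF rate equation: one neuron per pixel, with the bias current set from pixel intensity so that brighter pixels fire faster. The intensity-to-bias mapping here is arbitrary, chosen only for illustration; in practice you would configure the gains and biases on the actual on-chip ensemble.

```python
import numpy as np

def lif_rate(current, tau_rc=0.02, tau_ref=0.002):
    """Steady-state LIF firing rate for a constant input current.

    Neurons with current <= 1 (threshold) stay silent; above
    threshold, rate = 1 / (tau_ref + tau_rc * ln(J / (J - 1))).
    """
    current = np.asarray(current, dtype=float)
    rates = np.zeros_like(current)
    active = current > 1.0
    rates[active] = 1.0 / (
        tau_ref + tau_rc * np.log1p(1.0 / (current[active] - 1.0))
    )
    return rates

# Hypothetical mapping: brighter pixel -> larger bias current
pixels = np.linspace(0.0, 1.0, 5)   # 5 example pixel intensities
bias = 0.5 + 1.5 * pixels           # dark pixels sit below threshold
print(lif_rate(bias))               # rates increase with brightness
```

With static images, only the `bias` vector needs updating when the input image changes, which is what keeps the reconfiguration cost low.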

One thing that concerns me is that very little processing is being done on Loihi with the scanline encoding model shown in this figure, and it’s not exactly clear how you would scale this up to more interesting datasets. My intuition is that the scanline encoding would work well for simple binary images like MNIST, but not so well for more realistic images.


Exactly my impression too. I have no additional info about this specific use case, but I also suspect that the encoding is just being done off-chip, and I doubt it would work for more complex classification tasks. I may explore it though, and will of course update this post if I get any interesting results. If a sparse number of scanlines significantly reduces the data size while retaining accuracy, then that’s useful, but the biologically inspired angle of scanlines doesn’t seem as compelling. Thank you so much for the prompt reply!


The algorithm (classifier) reminds me of the random forest algorithm and of the instructive material Nando de Freitas has published on the net. RapidMiner also used to offer that algorithm in its core package.