I’m not aware of anybody who’s tried that in Nengo.
Do you know if the scanline encoding is being done off-chip? That’s my impression from the figure: the scanline encoder is what turns the image into spikes, which are then sent to the chip for the output transform. If that’s the case, then I think the efficacy of this approach on Loihi would really depend on how efficient the scanline encoding is on the CPU. If it takes significant CPU processing, then it might be better to do a pixel-wise encoding into spikes. If it can be done very efficiently (which could be the case if the number of scanlines is much smaller than the number of pixels), then maybe there would be some advantage to this.
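To make the pixel-wise alternative concrete, here’s a rough NumPy sketch of encoding each pixel as an independent Poisson spike train, with rate proportional to intensity. This is not Nengo Loihi code, and the `max_rate` and `dt` values are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def pixelwise_poisson_spikes(image, max_rate=100.0, dt=0.001, n_steps=50):
    """Encode each pixel as an independent Poisson spike train.

    Spike probability per timestep is rate * dt, with the rate
    proportional to pixel intensity (assumed in [0, 1]). Returns a
    (n_steps, n_pixels) boolean array; max_rate and dt are illustrative.
    """
    p = np.clip(image.ravel() * max_rate * dt, 0.0, 1.0)
    return rng.random((n_steps, p.size)) < p

image = rng.random((28, 28))  # stand-in for an input image
spikes = pixelwise_poisson_spikes(image)
```

The point of comparison is cost: this encoding is one multiply and one random draw per pixel per timestep, so the scanline approach only wins if it produces far fewer events than there are pixels.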
(Currently in Nengo Loihi, the whole process of encoding images to spikes happens off-chip, but there might be ways to do it on chip. For example, you could have on-chip neurons whose biases you set based on the pixel intensities in your image. While the chip wasn’t designed for fast reconfiguration of biases, with static images you would only need to change the biases when the input image changes, so it might not be too bad.)
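To give a sense of what the bias-based idea would do, here’s a rough NumPy sketch of the steady-state LIF firing rate you’d get from a bias current derived from pixel intensity. The intensity-to-current mapping is made up, and this isn’t actual Nengo Loihi configuration code, just the underlying arithmetic:

```python
import numpy as np

def lif_rate(j, tau_rc=0.02, tau_ref=0.002):
    """Steady-state LIF firing rate for input current j (zero for j <= 1)."""
    out = np.zeros_like(j, dtype=float)
    active = j > 1
    out[active] = 1.0 / (tau_ref + tau_rc * np.log1p(1.0 / (j[active] - 1.0)))
    return out

# Hypothetical mapping: pixel intensity in [0, 1] -> bias current in [0, 2],
# one neuron per pixel, bias set once per input image.
image = np.random.rand(28, 28)
biases = 2.0 * image.ravel()
rates = lif_rate(biases)
```

Each neuron then fires at a rate determined entirely by its pixel’s intensity, which is why the biases only need updating when the image changes.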
One thing that concerns me is that, in the model shown in this figure, very little of the processing is actually being done on Loihi, and it’s not exactly clear how you would scale the scanline encoding up to more interesting datasets. My intuition is that it would work well for simple binary images like MNIST, but not so well for more realistic images.