Has anyone done something like this with spiking neural networks, like for example an autoencoder for the MNIST dataset? And if so, how can something like this be implemented using nengo? Any ideas?
Are you looking for spiking networks in general, or spiking networks trained in a biologically plausible manner?
Sorry for my ignorance, but what would be the difference between those two?
No need to be sorry! I also should have been more welcoming.
Auto-encoders are typically learned via back-prop, which you can apply to a spiking neural network using NengoDL.
However, back-prop is not biologically plausible. If you want to learn in a biologically plausible manner, you’re going to need to use something like Feedback Alignment, which is implemented in Nengo, although I can’t remember where right now.
Thanks a lot for the clarification! I’m more interested in making the autoencoder learn in a biologically plausible manner. Any idea whether Feedback Alignment can be found in Nengo or in nengo_dl?
Both @tcstewar and @Eric have worked on Feedback Alignment, so they should be able to help you. Could you give us some more details on why you’re interested in learning an auto-encoder in a biologically plausible manner?
I’ve been reading about what I think is a really cool idea called the Predictive Vision Model (PVM), proposed by Filip Piekniewski. In this meta-architecture (as he calls it), the basic unit is something like an autoencoder (or an MLP with a bottleneck). He also states that this meta-architecture could be implemented with, for example, spiking neural networks, which is why I’m interested in learning an autoencoder in a biologically plausible manner. The PVM has been tested on object tracking, but as Piekniewski states, it can be applied to other tasks. Another interesting feature is that, since the architecture is a fully recurrent system, it is capable of online learning. Here’s a link where Piekniewski explains more about this architecture:
Sorry it took me so long to get back to you. I was rushing to finish a paper.