Implementing a neural support vector machine in Nengo

Hello everyone,

I’m looking for a way to implement classification of features with a neural support vector machine in Nengo, and I found this article from the Computational Neuroscience Research Group. However, I don’t understand how the proposed neural network was implemented, more specifically how the recurrent layer was built and how it was connected to the scikit-learn SVM to classify the textures.

Could someone explain how this network might have been implemented in Nengo?

Thank you so much!

Hi @jone,

I took a look at the paper, and it’s definitely unclear what the exact structure of the SVM (or the last layer) is. To be honest, the best people to ask are the authors of the paper (@arvoelke is a co-author); reach out to them and see if you can get the Nengo model code.

From my read of the paper, if I were to hazard a guess, the SVM layer is an array of leaky integrators, with one integrator per output class. It seems like the authors trained an ordinary SVM on the frequency data and then used the trained weights as the transform matrix (i.e., the transform parameter of a nengo.Connection) between the second-to-last (hidden) layer and the last (recurrent) layer. However, the behaviour of a leaky integrator depends heavily on its recurrent weight (i.e., how leaky the integrator is), and the paper is light on those details.
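
To make that guess concrete, here is a minimal sketch of how I imagine it could be wired up. To be clear, this is my interpretation, not the authors' code: the feature dimensionality, neuron counts, time constants, the use of LinearSVC, the EnsembleArray layout, and the stimulus node are all placeholders I made up for illustration, so you would need to swap in your own feature pipeline and tune the parameters.

```python
import numpy as np
import nengo
from sklearn.svm import LinearSVC

n_features = 64      # assumed dimensionality of the frequency features
n_classes = 10       # assumed number of texture classes

# 1) Train an ordinary SVM offline on the (static) frequency features.
#    The random arrays below are stand-ins for real training data.
features = np.random.randn(500, n_features)
labels = np.random.randint(0, n_classes, 500)
svm = LinearSVC().fit(features, labels)
W = svm.coef_        # shape (n_classes, n_features); reused as the Nengo transform

tau_syn = 0.1        # synaptic filter on the connections (a guess)
tau_int = 0.5        # leak time constant of the integrators (a guess)

with nengo.Network() as model:
    # Stimulus node feeding the current feature vector
    # (replace with however the paper's preprocessing produces features)
    stim = nengo.Node(lambda t: features[int(t / 0.001) % len(features)])

    # Hidden layer representing the feature vector
    hidden = nengo.networks.EnsembleArray(n_neurons=50, n_ensembles=n_features)
    nengo.Connection(stim, hidden.input)

    # "SVM layer": one leaky integrator per output class
    svm_layer = nengo.networks.EnsembleArray(n_neurons=50, n_ensembles=n_classes)

    # Feedforward connection uses the trained SVM weights as the transform;
    # the tau_syn scaling follows the standard NEF mapping for integrator input
    nengo.Connection(hidden.output, svm_layer.input,
                     transform=tau_syn * W, synapse=tau_syn)

    # Recurrent connection implements the leak: 1 - tau_syn / tau_int is the
    # NEF mapping of dx/dt = -x / tau_int onto a synapse with time constant tau_syn
    nengo.Connection(svm_layer.output, svm_layer.input,
                     transform=(1 - tau_syn / tau_int) * np.eye(n_classes),
                     synapse=tau_syn)

    # Read out the integrator states; the predicted texture is the argmax over classes
    probe = nengo.Probe(svm_layer.output, synapse=0.01)
```

Note that this sketch drops the SVM intercepts (svm.intercept_); if you need them, you could feed them in through a constant-bias Node. The choice of tau_int is where the "how leaky" question shows up, so that is the parameter I would expect to matter most when trying to reproduce the paper's results.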