On the application of supervised learning in Nengo for data stream mining. Q1: encoding methods

Hello everyone.

I’m a researcher focused on machine learning, specifically the development of supervised/unsupervised methods for data stream mining (DSM). I recently discovered Nengo and SNNs in general, and I think they are impressive; the temporal dynamics of SNNs are a perfect fit for DSM. However, the supervised learning concepts I have been reading about in the literature differ significantly from the ones employed in Nengo. Maybe I’m confused by terminology, so I’m asking you for some clarification. Please note that all the concepts I use below come from the machine-learning literature.

I have two main questions: one about encoding methods and the other about learning algorithms. As these questions can get long, I’m going to write two separate topics for clarity. Here we go!

My first question is related to encoding methods. It is well-known that the encoding method employed in SNNs significantly affects the performance (e.g., in terms of accuracy) of the model. For example, I’m going to refer to this recent work:

Petro, B., Kasabov, N., & Kiss, R. M. (2020). Selection and Optimization of Temporal Spike Encoding Methods for Spiking Neural Networks. IEEE Transactions on Neural Networks and Learning Systems, 31(2), 358–370. https://doi.org/10.1109/TNNLS.2019.2906158

Here, the authors mention three main types of encoding methods: firing-rate coding, population rank coding, and temporal coding (although they only focus on the latter). They then analyze the performance of four temporal encoding methods: threshold-based representation (TBR), step-forward encoding (SF), moving window (MW), and Ben’s Spiker Algorithm (BSA).
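To make my question concrete, here is a minimal NumPy sketch of my reading of step-forward (SF) encoding (the threshold and test signal are arbitrary choices of mine, just for illustration): the encoder tracks a baseline and emits a positive/negative event whenever the signal steps more than a threshold above/below it.

```python
import numpy as np

def step_forward_encode(signal, threshold):
    """Step-forward (SF) encoding, as I understand it: emit +1 / -1
    whenever the signal rises / falls more than `threshold` away from
    a tracked baseline, then move the baseline by `threshold` in that
    direction."""
    baseline = signal[0]
    spikes = np.zeros(len(signal))
    for i, x in enumerate(signal):
        if x > baseline + threshold:
            spikes[i] = 1.0
            baseline += threshold
        elif x < baseline - threshold:
            spikes[i] = -1.0
            baseline -= threshold
    return spikes

signal = np.sin(np.linspace(0, 2 * np.pi, 100))
spikes = step_forward_encode(signal, threshold=0.2)
```

Note that this operates on a buffered signal; in a streaming setting the baseline update would run sample by sample.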

  1. To which of these three types does the NEF belong? I would guess population coding, as each neuron of an Ensemble generates a different spike train according to its configuration, which also depends on the domain of the input data.

  2. One of the oldest population-based encoding methods I’ve seen so far is the Gaussian Receptive Fields (GRF) method. As the NEF’s encoding seems more or less similar, is it possible to implement GRF in Nengo? In GRF, neurons are defined by Gaussian distributions instead of being LIF neurons. I guess this is not possible via built-in functionality, but it could be done by means of a custom Node function.

  3. Is it possible to simulate the other encoding types in Nengo? For example, the four methods presented above do not use neurons to generate the spike trains. I think it is possible, but I would have to implement a custom Node that contains the coding method.

  4. For a custom encoding method, the result of the encoding should be passed directly to the .neurons attribute of an Ensemble in order to bypass the NEF encoding. However, can the NEF correctly decode the output generated by such a custom method? I think it is possible, but it is not strictly necessary, as DSM’s objective is to adjust the synaptic weights according to the errors made with respect to the input data.

Please, correct me if I am wrong with any of these concepts.
Thanks!

I haven’t looked into the details of the referenced work, but everything you said sounds right to me. You should be able to use a custom Node and connect directly to .neurons in order to encode the input with whatever logic you want (note that you can also make your Node’s output stateful by making it a callable class). Whether it can also be decoded will depend on a lot of factors, so we’d have to see some specific details to start figuring that out. The other detail that will need to be figured out is how to express the custom learning rule you are interested in using in terms of the available signals. Let us know how it goes!
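As a sketch of the callable-class idea, here is what a stateful moving-window (MW) style encoder might look like (this is one reading of MW; the window length, time step, and threshold are illustrative defaults, not values from the paper):

```python
import numpy as np
from collections import deque

class MovingWindowEncoder:
    """Stateful node callable: emit +1 / -1 when the input deviates
    from the mean of a recent window by more than a threshold (one
    reading of moving-window encoding)."""

    def __init__(self, window=0.05, dt=0.001, threshold=0.1):
        self.history = deque(maxlen=max(1, int(window / dt)))
        self.threshold = threshold

    def __call__(self, t, x):
        x = float(x)
        # Baseline is the mean of the recent window (or the current
        # sample if nothing has been seen yet).
        base = np.mean(self.history) if self.history else x
        self.history.append(x)
        if x > base + self.threshold:
            return 1.0
        if x < base - self.threshold:
            return -1.0
        return 0.0
```

Because the instance carries its own state, it could be dropped into a model as `nengo.Node(MovingWindowEncoder(), size_in=1, size_out=1)` and its output connected onward like any other node.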

Thanks @arvoelke for your response. It looks like I’m on the right track :grin:. Regarding the decoding method, I’ll first try to solve the encoding and learning parts.

About the custom learning methods: I’m going to write another post, because it’s a longer question than this one. It’s an interesting topic.

I will let you know about my progress!
Cheers.