I’m a researcher focused on machine learning, specifically the development of supervised/unsupervised methods for data stream mining (DSM). I recently discovered Nengo and SNNs in general, and I think they are impressive. The temporal dynamics of SNNs seem perfect for DSM. However, the supervised learning concepts I have been reading about in the literature are significantly different from the ones employed in Nengo. Maybe I’m confused due to terminology, so I’m asking for some clarification. Please note that all concepts I’m going to use come from the machine learning literature.
I have two main questions: one related to encodings and the other related to learning algorithms. Since these questions may be lengthy, I’m going to write two separate topics for clarity. Here we go!
My first question is related to encoding methods. It is well-known that the encoding method employed in SNNs significantly affects the performance (e.g., in terms of accuracy) of the model. For example, I’m going to refer to this recent work:
Petro, B., Kasabov, N., & Kiss, R. M. (2020). Selection and Optimization of Temporal Spike Encoding Methods for Spiking Neural Networks. IEEE Transactions on Neural Networks and Learning Systems, 31(2), 358–370. https://doi.org/10.1109/TNNLS.2019.2906158
Here, the authors mention three main types of encoding methods: firing rate, population/rank coding, and temporal coding (although they only focus on the latter). Then, they analyze the performance of four different encoding methods, namely threshold-based representation (TBR), step-forward encoding (SF), moving window (MW), and Ben’s Spiker Algorithm (BSA).
To which of these three types does the NEF belong? I would guess population coding, since each neuron of an Ensemble generates a different spike train according to its configuration (tuning curve, encoder, intercept), which also depends on the domain of the input data.
One of the oldest population-based encoding methods I’ve seen so far is the Gaussian Receptive Fields (GRF) method. Since the NEF’s encoding seems more or less similar, is it possible to implement GRF in Nengo? In GRF, each neuron is defined by a Gaussian tuning curve rather than as a LIF neuron. I guess this is not possible via built-in functionality, but it could be done by means of a custom Node function.
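To make my understanding of GRF concrete, here is a rough NumPy sketch of the idea (not Nengo-specific): each neuron has a Gaussian tuning curve over the input range, and stronger activation is mapped to an earlier spike via time-to-first-spike. The width heuristic and the linear activation-to-latency mapping are my own assumptions, not taken from any particular implementation:

```python
import numpy as np

def grf_encode(x, n_neurons=8, x_min=0.0, x_max=1.0, t_max=1.0):
    """Encode a scalar x into one spike time per neuron using
    Gaussian receptive fields (time-to-first-spike coding).

    Neuron centers are spread evenly over [x_min, x_max]; an
    activation of 1 spikes at t=0, an activation of 0 at t_max.
    """
    centers = np.linspace(x_min, x_max, n_neurons)
    # Common heuristic: width tied to the spacing between centers
    sigma = (x_max - x_min) / (n_neurons - 1)
    activation = np.exp(-0.5 * ((x - centers) / sigma) ** 2)
    return t_max * (1.0 - activation)

# The neuron whose center matches x spikes first (at t=0 here)
times = grf_encode(0.5, n_neurons=5)
```

If this is roughly right, I imagine the function above could be evaluated inside a `nengo.Node` to feed downstream neurons.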
Is it possible to simulate the other encoding types in Nengo? For example, the four methods presented above do not rely on neurons to generate the spike trains. I think it is possible, but I would have to implement a custom Node that contains the encoding method.
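For instance, step-forward (SF) encoding, as I understand it from the paper, can be written as a plain function and then wrapped in a Node. This is only my sketch of the algorithm (baseline starts at the first sample, moves by one threshold per spike); I may be misreading the details:

```python
import numpy as np

def step_forward_encode(signal, threshold):
    """Step-forward (SF) encoding of a 1-D signal.

    Emits +1 when the signal exceeds baseline + threshold and -1
    when it drops below baseline - threshold, moving the baseline
    by one threshold step each time a spike is emitted.
    """
    baseline = signal[0]
    spikes = np.zeros(len(signal))
    for i in range(1, len(signal)):
        if signal[i] > baseline + threshold:
            spikes[i] = 1.0
            baseline += threshold
        elif signal[i] < baseline - threshold:
            spikes[i] = -1.0
            baseline -= threshold
    return spikes

spikes_out = step_forward_encode(
    np.array([0.0, 0.2, 0.4, 0.3, 0.0]), threshold=0.15
)
```

A streaming variant of this (keeping `baseline` as state between calls) is what I would put inside the custom Node.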
For a custom encoding method, the result of the encoding should be passed directly to the `.neurons` attribute of an Ensemble in order to bypass the NEF encoding. However, can the NEF correctly decode the output generated by this custom method? I think it is possible, but it may not even be necessary, as DSM’s objective is to adjust the synaptic weights according to the errors made with respect to the input data.
Please correct me if I am wrong about any of these concepts.