Why, for each connection, are complete encoding and decoding performed between the state space and the neuron space, instead of sending the spikes directly to other neurons?


I am very interested in the encoding and decoding mechanism of Nengo. Compared with NEURON, what is unique about Nengo? Why is it necessary to perform full encoding and decoding for each connection in Nengo, instead of sending spikes between neurons as NEURON does?


You can send spikes directly from neurons to neurons using direct connections of the form nengo.Connection(pre.neurons, post.neurons, ...). However, it typically won’t be as efficient in space or time. Decoding and encoding scales linearly in the size of the pre and post populations, while sending spike vectors scales with the product of the two ensemble sizes (both in memory requirements for the weights and in the time required to do the matrix-vector multiplies). A nice overview of why this difference is important for hardware implementation can be found in this paper: https://ieeexplore.ieee.org/document/7280390.
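To make the scaling argument concrete, here is a small sketch (the ensemble sizes are made up for illustration) comparing how many weight values the two kinds of connection have to store:

```python
# Hypothetical example: two ensembles of 500 neurons each, jointly
# representing a 2-dimensional value (sizes chosen for illustration only).
n_pre, n_post, d = 500, 500, 2

# A decoded connection stores a (d x n_pre) decoder matrix and an
# (n_post x d) encoder matrix -- linear in the population sizes:
factored_weights = d * n_pre + n_post * d

# A direct neuron-to-neuron connection stores a full (n_post x n_pre)
# weight matrix -- the product of the population sizes:
full_weights = n_post * n_pre

print(factored_weights)  # 2000 values
print(full_weights)      # 250000 values
```

The same ratio applies to the per-timestep matrix-vector multiply, which is why the factored form is typically preferred unless you need to learn or inspect the individual weights.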

Does this mean that the pre and post ensembles are fully connected? Also, when I was reading the article *Incorporating Biologically Realistic Neuron Models into the NEF*, I saw a figure illustrating the neuron model.

What is the role of the neuron model mentioned in it? Are the spikes delivered to the neuron model? In the code for nengo.LIF, I noticed that J was sent into the cell model as an input variable. What is its physiological significance?

In either case (i.e., whether you connect nengo.Connection(pre.neurons, post.neurons, ...) or nengo.Connection(pre, post, ...)), the two ensembles are effectively fully connected. The reason I say “effectively” is because if you are doing a decode followed by an encode, then by linearity those two matrix multiplies are functionally equivalent to a single all-to-all weight matrix multiply. Mathematically, $E(Dx(t)) = Wx(t)$ where $W = ED$ is your weight matrix (the product of the encoder and decoder matrices).
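This equivalence is easy to check numerically. A minimal sketch with made-up sizes (here $x$ is the vector of filtered pre-population activities that the decoders act on):

```python
import numpy as np

rng = np.random.default_rng(42)
n_pre, n_post, d = 8, 6, 2  # small hypothetical sizes

D = rng.standard_normal((d, n_pre))   # decoders of the pre ensemble
E = rng.standard_normal((n_post, d))  # encoders of the post ensemble
x = rng.standard_normal(n_pre)        # filtered pre-neuron activities

# Decode-then-encode is the same linear map as one full weight matrix
W = E @ D                             # (n_post x n_pre)
assert np.allclose(E @ (D @ x), W @ x)
```

Because matrix multiplication is associative, the factored and full-weight forms compute identical currents; they differ only in cost.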

May also be relevant to mention that Nengo does support sparse transforms if you know the connection matrix is sparse and you want to take advantage of that for efficiency (certain hardware such as Loihi can take advantage of this).
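As a rough illustration of what a sparse transform buys you, here is a sketch using scipy.sparse (the weight values are made up; in Nengo itself this would be passed as a sparse transform on the connection):

```python
import numpy as np
from scipy import sparse

# Hypothetical 4x5 connection with only three nonzero weights
dense = np.zeros((4, 5))
dense[0, 1] = 0.5
dense[2, 3] = -1.0
dense[3, 0] = 2.0

W = sparse.csr_matrix(dense)      # stores only the nonzero entries
x = np.arange(5, dtype=float)

assert np.allclose(W @ x, dense @ x)
print(W.nnz)  # 3 stored weights instead of 20
```

Hardware like Loihi can exploit this structure directly, skipping the zero entries in both storage and computation.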

For your actual question, I’m not sure what you mean by “the role of the neuron model”? Spikes are delivered to the neuron model, typically after they are filtered by the synapse (or dendritic compartments in the case of more biologically detailed models). In nengo.LIF, the variable J is the current source that is obtained by filtering a weighted summation of incoming spikes over time (as defined by the decoders from the pre population and the encoders of the post population). And so the physiological significance of J here is as a standard current that has been normalized according to the response curve of the LIF neuron.
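To show how J drives the neuron, here is a minimal sketch of a normalized LIF update (a simplified Euler version for illustration only, not the exact nengo.LIF implementation, which also handles the refractory period and uses a more accurate integration step):

```python
# Simplified, normalized LIF neuron: voltage leaks toward the input
# current J with membrane time constant tau_rc (assumed parameters).
def lif_step(voltage, J, dt=0.001, tau_rc=0.02, v_th=1.0):
    # (J - voltage) is the leaky-integration term: with no input the
    # voltage decays to 0; with constant J it converges toward J.
    voltage = voltage + (J - voltage) * (dt / tau_rc)
    spiked = voltage >= v_th
    if spiked:
        voltage = 0.0  # reset after a spike (refractory period omitted)
    return voltage, spiked

# Drive the neuron with a constant suprathreshold current for 1 second
v, n_spikes = 0.0, 0
for _ in range(1000):  # dt = 1 ms
    v, s = lif_step(v, J=1.5)
    n_spikes += s
print(n_spikes)  # fires repeatedly, since J exceeds the threshold
```

With J below the (normalized) threshold of 1, the voltage saturates without ever spiking, which is exactly the behavior the LIF response curve describes.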

Great questions by the way! Thanks and welcome.

Thank you so much! By the way, is the encoding process performed in the neuron model? Is the role of the neuron model to provide nonlinearity? Is the role of the filter to filter out noise? Also, in the article, does x represent the input spike train? If J represents current, what does it mean to subtract the voltage from the current in the code?


Hi YL0910,

  • The encoding process is performed to transform a vector space signal into a current that is passed into the neuron model.
  • Because the filter smooths the signal, it does help filter out noise; this smoothing of the incoming spike trains is the primary reason for applying a filter in your model.
  • Assuming that you’re referring to the x in Eq 1, x is the input vector-space signal (not a spike train). alpha * np.dot(e, x) + J_bias gives the input current that is sent to the neuron model.
  • The highlighted equation is integrating the input current for dt seconds, which leads to a change in voltage.
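Putting those two bullets together in code (a sketch with made-up values for alpha, e, and J_bias; in Nengo these are per-neuron parameters chosen when the ensemble is built):

```python
import numpy as np

# Hypothetical parameters for one neuron encoding a 2-D signal
alpha = 2.0                  # gain
e = np.array([0.6, 0.8])     # unit-length encoder (preferred direction)
J_bias = 1.0                 # bias current

x = np.array([0.5, -0.25])   # vector-space input signal (Eq 1's x)

# Encoding: project x onto the preferred direction, scale, add bias
J = alpha * np.dot(e, x) + J_bias
print(J)  # input current delivered to the neuron model -> 1.2

# Integrating that current for dt seconds changes the membrane voltage
dt, tau_rc, voltage = 0.001, 0.02, 0.0
voltage += (J - voltage) * (dt / tau_rc)
print(voltage)  # small voltage increase after one timestep
```

So encoding happens before the neuron model: it turns the vector x into a scalar current J, and the neuron model then nonlinearly turns that current into spikes.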

Does that answer your questions?

I got it! Thank you for answering my question!