Questions about the implementation of encoders and connection functions

Hello everyone, I'm new to Nengo. While recently reading the code of a path integrator, I came across a line involving an encoder whose underlying principle I couldn't quite grasp. After reading the paper, I roughly understand that it's related to the preferred directions of neurons, but I still don't fully understand the specifics. Also, how are the eval_points obtained, and what is the principle behind them?

path_int = nengo.Ensemble(n_neurons=population_size, dimensions=glength+2, radius=1, encoders=directions, eval_points=samples)

Besides, I'm also confused about the encoding of the SSP space in SSP-SLAM, in particular why the encoders are calculated the way they are.

encoders = ssp_space.sample_grid_encoders(n_gcs)
self.output = nengo.Ensemble(
    n_gcs, d,
    encoders=encoders,
    intercepts=nengo.dists.Choice([sparsity_to_x_intercept(d, 0.1)]),
    label=label + '_output',
)

In addition, I've been trying to understand how the function func is computed in

nengo.Connection(A, B, function=func)

I suspect that func might take the decoded information from A as its input, perform some computation, and then encode the result as the input to B. If that's the case, do ensembles like A and B serve a merely representational role, where the output is simply a transformation of the input? What is the significance of this? In engineering projects, wouldn't using neural networks this way be more cumbersome? If that's not how it works, then how do connections between neurons approximate an arbitrary function?

To summarize, my main questions are:

  1. Given a custom vector space, how do we encode an input vector into a neuron ensemble?
  2. For the function associated with a connection between neuron ensembles, does the computation decode the input, apply the function, and then re-encode the result, or is the function approximated directly in the connection weights between neurons? What are the advantages of each approach?

Hello Jack, thank you for the questions!

Both of your questions are actually pretty related. When we set eval_points and func, we're defining the training data that is used to find the connection weights of a neural network that computes what we want the system to do.

So, we do not decode the output, perform the calculation, and then re-encode the result. Instead, we find a set of connection weights that go directly from one group of neurons (A) to the next group of neurons (B), and we optimize those connection weights to approximate the desired function as closely as possible. This is fairly similar to a normal neural-network training process, except that it uses just one layer of weights. The eval_points define the input training data (which defaults to samples from the unit hypersphere if you don't specify it), and func() is called on each of the eval_points to produce the target values for the training. (Or, if you already have your training data, you can just use that instead.)
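
To make this concrete, here is a minimal sketch (the squaring function, the sampling range, and the ensemble sizes are illustrative choices, not from the original code):

import numpy as np
import nengo

def func(x):
    return x ** 2  # the function we want the connection to approximate

# Training inputs for the weight solve; if omitted, Nengo samples
# them from the unit hypersphere by default.
samples = np.random.uniform(-1, 1, size=(1000, 1))

with nengo.Network() as model:
    A = nengo.Ensemble(n_neurons=100, dimensions=1)
    B = nengo.Ensemble(n_neurons=100, dimensions=1)
    # Nengo evaluates func at each eval_point to build the targets,
    # then solves for weights so that B receives approximately func(x)
    # whenever A represents x. Nothing is decoded, computed, and
    # re-encoded at runtime.
    nengo.Connection(A, B, function=func, eval_points=samples)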

As for the engineering significance, the main situation where this is useful is if you have neuromorphic hardware (i.e. hardware that is optimized for computing neural networks). Then you can efficiently use such hardware to compute whatever you want. That said, even on normal hardware it’s sometimes nice to have a neural network version of a function since then you can do interesting things like adjust the number of neurons in order to trade off accuracy with computation cost.

Thank you very much for your answer and for the contributions that have made Nengo so powerful and promising; I plan to base my future doctoral research on Nengo. I have one more question. As I mentioned above, given a custom encoding space, how do I calculate the encoders that encode the input into the desired format?

Hi Jack. In Nengo, a single neuron's activity, given input \mathbf{x}, is
a_i(\mathbf{x}) = G_i[\alpha_i (\mathbf{e}_i \cdot \mathbf{x}) + \beta_i]
\alpha_i, \beta_i, and \mathbf{e}_i are parameters that shape the neuron's tuning curve, and G_i is the neuron model, a nonlinear function that defines the relationship between the incoming current and the neuron's activity. \alpha_i and \beta_i are related to the max_rates and intercepts parameters of nengo.Ensemble. Encoders define what sort of input a particular neuron is sensitive to, and hence capture the shape of its tuning curve. Input orthogonal to a neuron's encoder does not elicit a response (above its baseline), while input aligned with the encoder elicits a strong response.
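
As an illustration (the specific encoders, rates, and intercepts below are arbitrary choices), you can inspect how these parameters shape tuning curves with nengo.utils.ensemble.tuning_curves:

import nengo
from nengo.utils.ensemble import tuning_curves

with nengo.Network() as model:
    # Two neurons in a 1-D space with opposite preferred directions.
    ens = nengo.Ensemble(
        n_neurons=2,
        dimensions=1,
        encoders=[[1], [-1]],     # e_i
        max_rates=[100, 100],     # determines the gain alpha_i
        intercepts=[-0.5, -0.5],  # determines the bias beta_i
    )

with nengo.Simulator(model) as sim:
    # activities[:, i] = a_i(x) = G_i[alpha_i (e_i . x) + beta_i]
    inputs, activities = tuning_curves(ens, sim)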

If you want neurons with particular tuning curves, you need to set both the encoders and the input representation appropriately. For example, if your input is a head direction \theta and you want neurons with cosine tuning curves (i.e., a neuron should fire maximally at some preferred direction \theta_i), then \mathbf{e}_i \cdot \mathbf{x} = \theta_i \cdot \theta won't produce the desired tuning curve. Instead, you need to project your input into a higher-dimensional space, so that \mathbf{e}_i \cdot \mathbf{x} = [\cos(\theta_i), \sin(\theta_i)] \cdot [\cos(\theta), \sin(\theta)] = \cos(\theta_i - \theta).
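
A minimal sketch of this idea (the number of neurons and the stimulus are made up for illustration): feed in \theta as the point [\cos(\theta), \sin(\theta)] and place the encoders on the unit circle.

import numpy as np
import nengo

# Preferred directions theta_i, one per neuron, as 2-D unit vectors.
preferred = np.linspace(0, 2 * np.pi, 50, endpoint=False)
encoders = np.column_stack([np.cos(preferred), np.sin(preferred)])

with nengo.Network() as model:
    # Represent theta by the point [cos(theta), sin(theta)]; here theta = t.
    stim = nengo.Node(lambda t: [np.cos(t), np.sin(t)])
    ens = nengo.Ensemble(n_neurons=50, dimensions=2, encoders=encoders)
    nengo.Connection(stim, ens)
    # Each neuron then sees e_i . x = cos(theta_i - theta),
    # i.e., a cosine tuning curve peaked at its preferred direction.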

In SSP-SLAM, a particular projection from 2- or 3-dimensional (or, in general, n-dimensional) space to a high-dimensional space with d \gg n is used. The resulting high-dimensional vectors are called Spatial Semantic Pointers (SSPs), \phi(\mathbf{x}). The ssp_space object has a method that defines this projection (the encode method). With this input encoding scheme, we can approximate a number of functions/tuning-curve shapes with \mathbf{e}_i \cdot \phi(\mathbf{x}) by setting the encoder \mathbf{e}_i in different ways. The method sample_grid_encoders(n_gcs) computes encoders such that \mathbf{e}_i \cdot \phi(\mathbf{x}) will be gridded over the space of \mathbf{x}. If \mathbf{x} is 2-D, the grid will be hexagonal, so neurons with these encoders will resemble grid cells.
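
For intuition only, here is a toy sketch of an SSP-style encoding, assuming the standard fractional-binding construction \phi(\mathbf{x}) = \mathcal{F}^{-1}\{e^{i A \mathbf{x}}\} with a random phase matrix A; the actual ssp_space in SSP-SLAM uses a structured (e.g. hexagonal) choice of A, which is what makes sample_grid_encoders possible.

import numpy as np

rng = np.random.default_rng(0)
n, d = 2, 257  # an odd d keeps the conjugate symmetry below simple
m = (d - 1) // 2

# Random phases, mirrored so that phi(x) comes out real-valued.
A_half = rng.uniform(-np.pi, np.pi, size=(m, n))
A = np.vstack([np.zeros((1, n)), A_half, -A_half[::-1]])

def encode(x):
    # phi(x): inverse DFT of the unit-modulus phasors exp(i * A @ x).
    return np.fft.ifft(np.exp(1j * A @ x)).real

phi = encode(np.array([0.3, -1.2]))  # a d-dimensional SSP-like vector
# The similarity encode(x) @ encode(y) peaks when x is close to y.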

Hello, @yaffa. Thank you very much for your patient answer. I now have a deeper understanding of encoders with specific tuning curves.