Hello everyone, I’m new to Nengo. I was recently looking at the code of a path integrator, and there is a line involving the encoders whose principle I couldn’t quite grasp. After reading the paper, I roughly understand that it is related to the preferred directions of the neurons, but I still don’t understand the specifics. Also, how are the eval_points obtained, and what is the principle behind them?
path_int = nengo.Ensemble(n_neurons=population_size, dimensions=glength+2, radius=1, encoders=directions, eval_points=samples)
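To check my understanding, here is a minimal, self-contained sketch of how I currently picture it (the directions and samples below are placeholders I made up, not the values from the original model): each row of encoders is a unit vector e_i, a neuron responds most strongly when the represented vector x points along e_i, and eval_points are just sample vectors x at which the decoders are later fit. Is this the right picture?

import nengo

dims = 3
n_neurons = 50

# My guess: each row of `encoders` is a neuron's preferred direction e_i,
# and the neuron's input current is proportional to the dot product e_i . x
directions = nengo.dists.UniformHypersphere(surface=True).sample(n_neurons, dims)

# My guess: `eval_points` are the sample vectors x at which the tuning curves
# are evaluated when the decoders (and hence connection weights) are solved for
samples = nengo.dists.UniformHypersphere().sample(1000, dims)

with nengo.Network():
    ens = nengo.Ensemble(n_neurons=n_neurons, dimensions=dims,
                         encoders=directions, eval_points=samples)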
Besides this, I’m also confused about the encoding of the SSP space in SSP-SLAM, in particular why the encoders are computed in this way:
encoders = ssp_space.sample_grid_encoders(n_gcs)
self.output = nengo.Ensemble(n_gcs, d, encoders=encoders, intercepts=nengo.dists.Choice([sparsity_to_x_intercept(d, 0.1)]), label=label + '_output')
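My rough reading of the intercepts part (this is just my own illustration with made-up placeholder encoders, not the actual SSP-SLAM code, and I don’t know what sample_grid_encoders does internally) is that a neuron only becomes active when the dot product between its encoder and the represented SSP exceeds the intercept, so a single shared, fairly high intercept makes only a small fraction of the grid-cell population respond to any given point:

import numpy as np

d = 64        # placeholder SSP dimensionality
n_gcs = 500   # placeholder number of grid cells
rng = np.random.default_rng(0)

# placeholder encoders: random unit vectors (the real ones come from
# ssp_space.sample_grid_encoders, whose construction I don't yet understand)
encoders = rng.standard_normal((n_gcs, d))
encoders /= np.linalg.norm(encoders, axis=1, keepdims=True)

# a random unit-length "SSP" standing in for the represented value
x = rng.standard_normal(d)
x /= np.linalg.norm(x)

# higher intercepts -> fewer neurons above threshold -> sparser activity
for intercept in [0.0, 0.1, 0.2, 0.3]:
    active = encoders @ x > intercept
    print(f"intercept {intercept:.1f}: {active.mean():.1%} of neurons active")

Is that roughly why sparsity_to_x_intercept(d, 0.1) is used, i.e. to pick the intercept that gives about 10% of the neurons being active at any one time?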
In addition, I’ve been trying to understand how the function func is computed in

nengo.Connection(A, B, function=func)

I suspect that func takes the decoded value represented by A as its input, performs some computation, and then encodes the result as input to B. If that’s the case, do ensembles like A and B play a purely representational role, where the output is simply a transformation of the input? What would be the significance of that? For engineering projects, wouldn’t it be more cumbersome than using an ordinary neural network? And if that is not how it works, how can connections between neurons approximate an arbitrary function?
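My current mental model (please correct me if it’s wrong) is that func is never actually executed during the simulation: it is only evaluated at the eval_points when the model is built, a set of decoders is solved for by least squares, and those decoders are folded into the connection weights. Here is a rough numpy sketch of what I imagine the build step does; the plain least-squares solve and all the names here are my own simplification, not Nengo’s actual builder:

import numpy as np
import nengo
from nengo.utils.ensemble import tuning_curves

def func(x):
    return x ** 2  # example function to pass as nengo.Connection(A, B, function=func)

with nengo.Network() as model:
    A = nengo.Ensemble(50, dimensions=1, seed=1)

with nengo.Simulator(model) as sim:
    # neuron activities of A at a set of evaluation points
    eval_points = np.linspace(-1, 1, 200).reshape(-1, 1)
    _, activities = tuning_curves(A, sim, inputs=eval_points)

# targets: the function evaluated at the eval_points
targets = func(eval_points)

# solve for decoders so that activities @ decoders ~= func(x)
# (plain least squares here; Nengo's solvers add regularization)
decoders, *_ = np.linalg.lstsq(activities, targets, rcond=None)

# If this picture is right, at runtime the connection only applies these fixed
# decoders (combined with B's encoders) as weights, and func itself is never called
approx = activities @ decoders
print("max decoding error:", np.abs(approx - targets).max())

Is this roughly what happens, and is the full A-to-B weight matrix then effectively B’s encoders multiplied by these decoders?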
To summarize, my main doubts come down to two points:
- Given a custom vector space, how do we encode an input vector into a neuron ensemble?
- For the function associated with a connection between ensembles, does the simulation decode the input, compute the function, and then encode the result, or is the function approximated directly in the connection weights between neurons? What are the advantages of each approach? (My current guess is summarized in the sketch below.)
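In other words, my current mental model is the standard NEF picture below, where G_i is the neuron nonlinearity, α the gain, J^bias the bias, i indexes neurons in the presynaptic ensemble A, and j indexes neurons in the postsynaptic ensemble B. I’d like to confirm whether this is what Nengo actually implements:

a_i(x) = G_i[\alpha_i \langle e_i, x \rangle + J_i^{\text{bias}}]    (encoding; e_i is neuron i's preferred direction)
\hat{f}(x) = \sum_i d_i^f \, a_i(x)                                  (decoding; d_i^f solved at the eval_points so that \hat{f} \approx f)
\omega_{ji} = \alpha_j \langle e_j, d_i^f \rangle                    (so f ends up baked directly into the A-to-B connection weights)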