You are absolutely correct. I neglected to fully explain one crucial part of the decoder-solving step (because it can be done in a couple of ways): the process by which the function inputs are derived.
In order to understand this process, I first need to explain how information flows through a neuron (i.e., how an input goes into a neuron to generate some output). In Nengo (and the NEF), as well as in other neural network architectures, the computation follows a similar schema:
input → input weights → neuron non-linearity → output weights → output
Before continuing, I should note that for multi-layer networks, some neural networks combine the output weights of one layer and the input weights of the next layer into a single “weight matrix”. That is, this flow:

input → L1 input weights → L1 non-linearity → L1 output weights → L2 input weights → L2 non-linearity → L2 output weights → output

becomes:

input → L1 input weights → L1 non-linearity → L1-L2 weight matrix → L2 non-linearity → L2 output weights → output
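This combination works because the two weight multiplications can be folded into one matrix product. Here is a minimal NumPy sketch (all names and shapes are illustrative, not Nengo's API) showing that routing the L1 activities through the output and input weights separately gives the same result as using a single combined L1-L2 weight matrix:

```python
import numpy as np

rng = np.random.default_rng(1)

a1 = rng.standard_normal(5)           # L1 neuron activities (after L1 non-linearity)
W_out1 = rng.standard_normal((5, 2))  # L1 output weights (decoders)
W_in2 = rng.standard_normal((2, 6))   # L2 input weights (encoders)

# Two-step route: decode to the signal, then re-encode for L2
current_a = (a1 @ W_out1) @ W_in2

# Combined route: fold both into a single L1-L2 weight matrix
W_full = W_out1 @ W_in2
current_b = a1 @ W_full

assert np.allclose(current_a, current_b)  # identical L2 input currents
```

The equivalence is just matrix-multiplication associativity, which is why the combined weight matrix loses no information (though it does hide the factorization into decoders and encoders).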
In the default mode of operation (i.e., using the NEF algorithm), the input weights are called “encoders”, and the output weights are called “decoders”. What’s different about Nengo compared to other neural network training software is that Nengo uses the NEF algorithm to solve for a set of decoders that “compute” a user-determined function, whereas other neural network training software (e.g., TensorFlow) uses a learning algorithm to tune these weights. Each method has its own advantages and disadvantages, but that’s beyond the scope of this discussion.
To summarize, in Nengo, an input signal is multiplied with a set of encoders (in Nengo, encoders are randomly generated by default) that “convert” the input signal into current that is fed into the neuron. This input current is then put through the neuron activation function to determine the neuron’s activity (for rate neurons, this would be the neuron’s firing rate) for that given input. The neuron activity is then multiplied with the decoders (solved for using the connection function) to produce an output signal.
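That signal flow can be sketched in a few lines of NumPy. This is a simplified stand-in, not Nengo's implementation: the rate non-linearity here is a rectified line rather than an LIF curve, and the decoders are placeholders (the next sections cover how they are actually solved for):

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, dims = 4, 1

# Random unit-length encoders (Nengo's default is also random)
encoders = rng.standard_normal((n_neurons, dims))
encoders /= np.linalg.norm(encoders, axis=1, keepdims=True)
gains = rng.uniform(1.0, 2.0, n_neurons)
biases = rng.uniform(-0.5, 0.5, n_neurons)

x = np.array([0.5])                        # input signal (one time step)
current = gains * (encoders @ x) + biases  # encoders convert input into current
activity = np.maximum(current, 0.0)        # rate non-linearity (rectified-linear stand-in)

decoders = rng.standard_normal((n_neurons, dims)) * 0.1  # placeholder decoders
output = activity @ decoders               # decoded output signal
```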
As for the actual process of generating the function inputs (to the decoder solver), there are two ways you can do it. The naive way is to assume that any valid input signal presented to the neurons will map onto one of the possible firing rates on the neuron’s activation curve. With this assumption, you can then use the entire activation curve (defined over some range of input currents) as the input data to the function that the decoder solver must solve.
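To make the naive approach concrete, here is a sketch with a single stand-in rate neuron (the names and the rectified-linear rate model are illustrative, not Nengo's API). Note how the amount of input data is tied directly to the sampling resolution of the activation curve:

```python
import numpy as np

# Stand-in rate model for one neuron
gain, bias = 2.0, 0.5

def rate(x):
    return np.maximum(gain * x + bias, 0.0)

# Naive approach: densely sample the whole valid input range,
# so every point on the activation curve becomes input data
xs = np.linspace(-1, 1, 100001).reshape(-1, 1)  # very high resolution
A = rate(xs)                                    # activity column for the neuron

# A (paired with the matching targets f(xs)) would then be handed to the
# decoder solver -- at this resolution, ~100k rows of data per solve.
```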
The naive approach, however, has the disadvantage that it generates a lot of data (depending on the resolution of the neuron’s activation curve). Nengo thus uses an alternative approach: it generates a random set of input values (known as evaluation points), which are then multiplied by the encoders and put through the neuron’s activation function to obtain each neuron’s firing rate (which can then be used as the function’s input data). If you look at the API for the `nengo.Connection` object, this is what the `eval_points` parameter specifies. Note that evaluation points can also be specified on the `nengo.Ensemble` object, which will be used if multiple output connections are created from said ensemble. By default, the evaluation points are chosen from the unit hypersphere (the hypersphere where the radius is 1) determined by the ensemble’s dimensionality.
Once again, to summarize: for each ensemble, Nengo creates a set of evaluation points. These evaluation points are multiplied by the ensemble’s encoders and used to compute the corresponding activities for each evaluation point. These activities are then used as input data for the solver to solve for the connection’s decoders.
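The whole evaluation-point pipeline fits in a short NumPy sketch. This is a simplified model (rectified-linear rate neurons and a plain least-squares solve stand in for Nengo's LIF neurons and regularized solvers), but the steps mirror the summary above: sample evaluation points, encode them into activities, then solve for decoders that compute a chosen function (here, f(x) = x²):

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, dims, n_eval = 60, 1, 500

# Stand-in ensemble parameters (illustrative, not Nengo's API)
encoders = rng.standard_normal((n_neurons, dims))
encoders /= np.linalg.norm(encoders, axis=1, keepdims=True)
gains = rng.uniform(1.0, 3.0, n_neurons)
biases = rng.uniform(-1.0, 1.0, n_neurons)

def activities(x):
    """Encode inputs into currents, then apply the rate non-linearity."""
    return np.maximum(gains * (x @ encoders.T) + biases, 0.0)

# 1. Sample evaluation points (in 1D, the unit hypersphere is just [-1, 1])
eval_points = rng.uniform(-1, 1, (n_eval, dims))

# 2. Compute each neuron's activity for every evaluation point
A = activities(eval_points)

# 3. Solve for decoders so that A @ decoders approximates f(eval_points)
targets = eval_points ** 2  # the user-determined function: f(x) = x**2
decoders, *_ = np.linalg.lstsq(A, targets, rcond=None)

# Decoded estimate of f(0.5); should be close to 0.25
x_test = np.array([[0.5]])
y = activities(x_test) @ decoders
```

In Nengo itself, the `eval_points` argument to `nengo.Connection` (or `nengo.Ensemble`) controls step 1, and the connection's `function` argument plays the role of `targets` here.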