Interesting questions! The way that Nengo optimizes connections (whether you provide explicit input/output training examples or pass a function) is usually through least-squares minimization. We first determine how all of the neurons in the pre-ensemble respond to the given input, then solve for a least-squares-optimal set of weights such that, when the neural responses are weighted by those weights and summed, we get the outputs we want.
I say that we *usually* use least-squares minimization because the way in which you solve for decoders can be modified; the `Connection` object has a `solver` parameter to change how decoding weights are solved for. The solvers contain the logic to optimize the weights, and can be seen here. However, it's difficult to understand that file without knowing a lot of the rest of Nengo, so we have also provided a more minimal description of how Nengo works here; ctrl+f for the `compute_decoder` function to see how we solve for decoders.
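At its core, the decoder solve reduces to an ordinary least-squares problem: collect the neural activities `A` in response to sample inputs, the desired outputs `Y`, and solve `A @ D ≈ Y` for the decoders `D`. Here is a minimal NumPy sketch of that idea; note the rectified-linear tuning curves below are a made-up stand-in for illustration, not Nengo's actual neuron models or solver code:

```python
import numpy as np

rng = np.random.default_rng(0)

n_neurons, n_samples = 50, 200
x = np.linspace(-1, 1, n_samples)[:, None]  # sample inputs

# Hypothetical tuning curves: rectified-linear responses with random
# gains, biases, and encoder signs (a stand-in for real neuron models)
gains = rng.uniform(0.5, 2.0, n_neurons)
biases = rng.uniform(-1.0, 1.0, n_neurons)
encoders = rng.choice([-1.0, 1.0], n_neurons)
activities = np.maximum(0, gains * (x * encoders) + biases)  # (n_samples, n_neurons)

# Function we want the connection to compute, e.g. f(x) = x^2
targets = x ** 2

# Least-squares solve for decoders: activities @ decoders ~= targets
decoders, *_ = np.linalg.lstsq(activities, targets, rcond=None)

decoded = activities @ decoders
rmse = np.sqrt(np.mean((decoded - targets) ** 2))
```

Nengo's actual solvers add refinements like regularization (the default is a regularized least-squares solve), but the structure of the problem is the same.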
To answer your questions directly, you can think of this offline optimization as a type of supervised learning, sure. However, it’s an offline learning procedure that results in static weights that do not change during the simulation. You could instead use the PES rule to do supervised learning online if you want that.
However, you don’t have to resort to online learning to modify the optimization process. I think the easiest way to achieve the results you want is to modify the inputs and outputs.
Say you had 5 input/output pairs. If you want some pairs to be more accurately decoded than others, you can add additional instances of those input/output pairs. Since the objective of the optimization procedure is to minimize the total squared error over all input/output pairs, including the same pair twice doubles its contribution to the total error, so the weights that result from the optimization will be more likely to decode well for that particular pair than for the others.
So, for the specific example you gave of weighting the 5 pairs
[0.2, 0, 0.3, 0.1, 0.4], you could do something like:
```python
import numpy as np

inputs = np.random.rand(5)
outputs = np.random.rand(5)
weighted_inputs = []
weighted_outputs = []
for i, weight in enumerate([2, 0, 3, 1, 4]):
    weighted_inputs.extend([inputs[i] for _ in range(weight)])
    weighted_outputs.extend([outputs[i] for _ in range(weight)])
```
To do the general case of a probability distribution, you’ll have to do some thinking about how best to generalize that snippet above, but it’s definitely doable.
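One way to generalize beyond integer repetition counts is to use weighted least squares directly: scale each row of the activity and target matrices by the square root of that sample's weight before solving. Repeating a pair k times is equivalent to weighting its squared error by k, so this reproduces the duplication trick for integer weights while allowing arbitrary probabilities. A sketch in plain NumPy (this bypasses Nengo's solver API entirely; `weighted_lstsq` is a hypothetical helper, not a Nengo function):

```python
import numpy as np

def weighted_lstsq(activities, targets, weights):
    """Solve for decoders minimizing sum_i w_i * ||a_i @ D - y_i||^2.

    Scaling row i of both matrices by sqrt(w_i) turns the weighted
    problem into an ordinary least-squares problem.
    """
    sqrt_w = np.sqrt(np.asarray(weights, dtype=float))[:, None]
    decoders, *_ = np.linalg.lstsq(
        activities * sqrt_w, targets * sqrt_w, rcond=None
    )
    return decoders

# Example: 5 samples, 3 neurons, 1 output dimension
rng = np.random.default_rng(1)
A = rng.random((5, 3))
Y = rng.random((5, 1))
p = [0.2, 0.0, 0.3, 0.1, 0.4]  # the probability weights from the question

D = weighted_lstsq(A, Y, p)
```

A sample with weight 0 contributes nothing to the solve, which matches the snippet above where a repetition count of 0 drops that pair entirely.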