Righto, I think I understand what is going on here. So… in Nengo, the input weights to a population of neurons can be split up into "encoders" and the neuron "gains", or they can be combined together to form the "scaled encoders". When you train the weights in NengoDL, it modifies the combined (scaled encoders) values, and these are the values you get when you use `sim.data` to probe the model.
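As a quick illustration of that relationship, here's a minimal sketch using the standard Nengo simulator (the ensemble here is a toy example of my own, not from your model):

```python
import numpy as np
import nengo

with nengo.Network() as net:
    ens = nengo.Ensemble(n_neurons=10, dimensions=2)

with nengo.Simulator(net) as sim:
    built = sim.data[ens]  # built parameters for this ensemble
    # gain has shape (n_neurons,); encoders has shape (n_neurons, dimensions).
    # With the default radius of 1, the scaled encoders are just the encoders
    # multiplied row-wise by the gains.
    recombined = built.gain[:, None] * built.encoders
    assert np.allclose(recombined, built.scaled_encoders)
```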
The `get_nengo_params` function, however, doesn't give you the values in the network. Rather, it returns the values that you would need to construct an equivalent Nengo network. Since the scaled encoders are the combination of the encoders and gains, the `get_nengo_params` function has to "reverse engineer" them to retrieve the encoder values. `get_nengo_params` opts to keep the neuron biases as they were during the initial construction of the network (i.e., before training), as this maintains maximum compatibility with anything you may add to this equivalent Nengo network after you have done the conversion.
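So the round trip looks roughly like this (a sketch only; the training step and the exact contents of the returned dict are assumptions on my part, not taken from your code):

```python
import nengo
import nengo_dl

with nengo.Network() as net:
    ens = nengo.Ensemble(n_neurons=10, dimensions=1)

with nengo_dl.Simulator(net) as sim:
    # ... training would happen here ...
    trained = sim.data[ens].scaled_encoders  # the combined, trained values
    params = sim.get_nengo_params(ens)  # reverse-engineered encoders/gains,
                                        # with biases left at their
                                        # construction-time values

# The returned dict can be used to build an equivalent plain-Nengo ensemble:
with nengo.Network():
    ens_copy = nengo.Ensemble(n_neurons=10, dimensions=1, **params)
```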
You can see from your output (`Parameters Layer1 After Training (get_nengo_params)`) that 0.9634403 (encoder) × 40.654743 (gain) = 39.16842 (the scaled encoders / gain you get with `sim.data`).
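You can double-check that arithmetic directly:

```python
encoder, gain = 0.9634403, 40.654743
print(encoder * gain)  # 39.168417..., matching the probed scaled encoder
```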