That is correct. Nengo combines the encoders, radius, and gains together to get the "scaled encoders". This is done to reduce the number of weights that have to be stored.
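To see this in action, here is a minimal sketch (my assumption being the builder relationship `scaled_encoders = encoders * gain / radius`; the ensemble parameters are arbitrary):

```
import numpy as np
import nengo

with nengo.Network(seed=0) as model:
    ens = nengo.Ensemble(10, 1, radius=2.0)

with nengo.Simulator(model) as sim:
    built = sim.data[ens]
    # Recombine the individually stored parameters and compare them to
    # the single "scaled encoders" matrix the simulator actually uses.
    recombined = built.encoders * (built.gain / ens.radius)[:, np.newaxis]
    assert np.allclose(recombined, built.scaled_encoders)
```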
I tested your code and have several notes:
- If you are not modifying the encoders of `post`, then simply setting the `seed` should be sufficient to create an ensemble that has the same parameters as the one in the initial network (see the sketch after this list).
- If you are modifying the encoders of `post`, then in addition to setting the encoder values, you'll also need to set the gain and bias values to match the values from the original network, like so:
```
post_new = nengo.Ensemble(
    n, d, encoders=trained_encoders,
    gain=sim.data[post].gain, bias=sim.data[post].bias,
    seed=seed)
```
- Creating a connection from the `pre.neurons` object is technically correct. However, there is another method for creating a connection with the trained decoders that preserves the original style of connection from an ensemble. This alternative method is to use the `NoSolver` solver and provide it the decoder weights from above, like so:
```
conn = nengo.Connection(
    pre, post, solver=nengo.solvers.NoSolver(trained_weights.T), seed=seed)
```
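As referenced in the first point above, here is a minimal sketch showing that identically seeded ensembles are built with identical parameters (the `n`, `d`, and `seed` values are arbitrary placeholders):

```
import numpy as np
import nengo

n, d, seed = 50, 1, 7
with nengo.Network() as model:
    post = nengo.Ensemble(n, d, seed=seed)
    post_new = nengo.Ensemble(n, d, seed=seed)

with nengo.Simulator(model) as sim:
    # Both ensembles were built from the same seed, so their
    # gains, biases, and encoders all match.
    assert np.allclose(sim.data[post].gain, sim.data[post_new].gain)
    assert np.allclose(sim.data[post].bias, sim.data[post_new].bias)
    assert np.allclose(sim.data[post].encoders, sim.data[post_new].encoders)
```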
This is correct. The gains are combined with the encoders here in the code, and this only applies to connections where the post object is a `nengo.Ensemble`. Note that for connections to neuron objects, the neuron gains are instead applied multiplicatively here in the code (it's a different logic path in the connection builder code). This difference in how the gains are handled between ensemble connections and neuron connections does cause an issue when saving and loading weights (see this GitHub issue), so we are aware of it. I did attempt a fix, but there are other issues I'm not fully considering that are causing the tests to fail (and I do not currently have the time to address them).
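For reference, here is a rough sketch of what that compensation could look like for the ensemble case (assumptions on my part: `trained_weights` was probed from a weight-solver connection into `post`, and `post` uses the default radius of 1):

```
import numpy as np

# Weights probed from a connection into an ensemble already include the
# post gains (rolled into the scaled encoders), so divide them out;
# otherwise the neuron connection's own gain multiplication would apply
# them twice.
gain = sim.data[post].gain
compensated_weights = trained_weights / gain[:, np.newaxis]
conn = nengo.Connection(
    pre.neurons, post.neurons, transform=compensated_weights, seed=seed)
```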
Note that the compensation for the gains (i.e., removing them from the weights) need only be done if your original connection is to an ensemble. If the original connection is to a neurons object, you do not have to compensate for the gains. E.g.,
```
# Original network
pre = nengo.Ensemble(n, d, seed=seed)
post = nengo.Ensemble(n, d, seed=seed)
# A direct neuron-to-neuron connection
conn = nengo.Connection(
    pre.neurons, post.neurons, transform=np.random.random(...), seed=seed)
...
probe_weights = nengo.Probe(conn, "weights")
...
# Take the weights from the final timestep of the probe data
trained_weights = sim.data[probe_weights][-1]
```
```
# Loaded network
pre = nengo.Ensemble(n, d, seed=seed)
post = nengo.Ensemble(n, d, seed=seed)
# A direct neuron-to-neuron connection
conn = nengo.Connection(
    pre.neurons, post.neurons, transform=trained_weights, seed=seed)
```
Regarding loading the weights in a new network, the `NoSolver` method does not work with full (neuron-to-neuron) connections. Instead, you will need to do what you have done (i.e., create a connection between the neurons and manually set the weights with the `transform` parameter).