Hi all,
I thought it might be nice to have an overview of how to use the connections from a previously trained network to create a new one that functions the same. There are multiple posts and comments on the forum that touch upon this, but it is still a bit vague to me.
Decoded connection
Let's make a network that contains two ensembles called pre and post, both with n neurons and dimension d. They are connected using a decoded connection. (I believe that there are no learning rules that adjust both encoders and decoders at the same time, but assuming that helps to keep this post a bit shorter.)
pre = nengo.Ensemble(n, d, seed=seed)
post = nengo.Ensemble(n, d, seed=seed)
# a decoded connection, the Nengo default
conn = nengo.Connection(pre, post, solver=nengo.solvers.LstsqL2(weights=False), seed=seed)
The decoders and the post ensemble's encoders are probed like this:
probe_dec = nengo.Probe(conn, "weights")
probe_enc = nengo.Probe(post, "scaled_encoders")
After doing whatever simulation and learning you want, you can access the information inside the probes:
trained_decoders = sim.data[probe_dec][-1] # probe for the last time point/measurement
trained_scaled_encoders = sim.data[probe_enc][-1]
We cannot, however, simply plug these values into the new network that we want to create from the connection of the old network. The scaled encoders need to be "unscaled" first; the decoders we can use as-is. NOTE: IS THIS CORRECT?
trained_encoders = trained_scaled_encoders * post.radius / sim.model.params[post].gain[:, None]
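As a sanity check on the unscaling step, here is a small NumPy round trip. The encoders, gains, and radius below are made-up stand-ins for what Nengo would generate, but the scaling relation (scaled encoders = encoders times gain, divided by the radius) is the one the line above inverts:

```python
import numpy as np

n, d = 50, 2
radius = 1.5
rng = np.random.RandomState(0)

# Made-up unit-length encoders and positive gains, standing in for
# what Nengo would generate for the post ensemble.
encoders = rng.randn(n, d)
encoders /= np.linalg.norm(encoders, axis=1, keepdims=True)
gain = rng.uniform(0.5, 2.0, size=n)

# How scaled encoders relate to encoders, gain, and radius.
scaled_encoders = encoders * gain[:, None] / radius

# The unscaling from above recovers the original encoders exactly.
recovered = scaled_encoders * radius / gain[:, None]
assert np.allclose(recovered, encoders)
```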
Now we are ready to construct a new network with the exact same properties as the one above.
# new model, made to copy the model built above
pre = nengo.Ensemble(n, d, seed=seed)
post = nengo.Ensemble(n, d, encoders=trained_encoders, seed=seed)
# the trained decoders are applied as a transform on a neuron-to-ensemble connection
conn = nengo.Connection(pre.neurons, post, transform=trained_decoders, seed=seed)
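To see why the decoders can be reused as-is: a decoded connection computes the decoded value x_hat = D @ a from the pre activities a, and a pre.neurons-to-post connection with transform=D computes exactly the same matrix product. A NumPy sketch with made-up shapes:

```python
import numpy as np

n, d = 50, 2
rng = np.random.RandomState(1)

# Probed decoders have shape (d, n); activities are a stand-in
# for pre's neuron outputs at one time step.
decoders = rng.randn(d, n)
activities = rng.rand(n)

# A decoded connection delivers x_hat = D @ a to post; using the
# decoders as a transform on pre.neurons performs the same product,
# which is why no rescaling of the decoders should be needed.
x_hat = decoders @ activities
assert x_hat.shape == (d,)
```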
Direct ensemble-to-ensemble connection
Let's make a network that contains two ensembles called pre and post, both with n neurons and dimension d. They are connected using a direct connection, i.e. with a full weight matrix.
pre = nengo.Ensemble(n, d, seed=seed)
post = nengo.Ensemble(n, d, seed=seed)
# a direct ensemble-to-ensemble connection
conn = nengo.Connection(pre, post, solver=nengo.solvers.LstsqL2(weights=True), seed=seed)
The connection weights are probed like this:
probe_weights = nengo.Probe(conn, "weights") # same argument as for the decoders of the decoded connection, but we are probing something different!
After doing whatever simulation and learning you want, you can access the information inside the probes:
trained_weights = sim.data[probe_weights][-1] # probe for the last time point/measurement
We cannot, however, simply plug these values into the new network that we want to create from the connection of the old network. The weights need to be divided by the post neurons' gains, as they are multiplied by the gains during the build process of the new network. (NOTE: where in the Nengo code does this happen? I can't find it.)
trained_weights = trained_weights / sim.model.params[post].gain[:, None]
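My understanding (please correct me if I'm wrong) is that a weights=True solver returns the full weight matrix with the post gains already folded in, i.e. W = scaled_encoders @ decoders. Under that assumption, the division above leaves (encoders / radius) @ decoders, so that if the builder multiplies the neuron-to-neuron transform by the post gains again, the original W is restored. A NumPy sketch of that algebra, with made-up values:

```python
import numpy as np

n_pre, n_post, d = 40, 50, 2
radius = 1.0
rng = np.random.RandomState(2)

# Made-up encoders, gains, and decoders standing in for the trained network.
encoders = rng.randn(n_post, d)
encoders /= np.linalg.norm(encoders, axis=1, keepdims=True)
gain = rng.uniform(0.5, 2.0, size=n_post)
decoders = rng.randn(d, n_pre)

# Assumption: the probed full weights are scaled_encoders @ decoders,
# so the post gains and radius are already part of W.
scaled_encoders = encoders * gain[:, None] / radius
W = scaled_encoders @ decoders

# Dividing out the gains, as in the post above ...
W_unscaled = W / gain[:, None]

# ... leaves (encoders / radius) @ decoders, and multiplying the gains
# back in recovers W exactly.
assert np.allclose(W_unscaled, (encoders / radius) @ decoders)
assert np.allclose(W_unscaled * gain[:, None], W)
```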
Now we are ready to construct a new network with the exact same properties as the one above.
# new model, made to copy the model built above
pre = nengo.Ensemble(n, d, seed=seed)
post = nengo.Ensemble(n, d, seed=seed)
# a direct neuron-to-neuron connection carrying the full weight matrix
conn = nengo.Connection(pre.neurons, post.neurons, transform=trained_weights, seed=seed)
Question
Is this correct? I am mostly unsure about what needs to be done after reading out the probes, i.e. the "transformation" of the probed weights/encoders/decoders.