Saving connection weights

How can I save connection weights that were learned using PES and use them in future runs?

@nrofis, you can save the connection weights learned with the PES learning rule using either of the methods below:

  • Using a nengo.Probe:
with model:
    p_weights = nengo.Probe(conn, "weights")
  • Accessing the signal value directly:
with nengo.Simulator(model) as sim:
    sim.run(1.0)
    # Read the weight signal directly from the simulator
    weights = sim.signals[sim.model.sig[conn]["weights"]]

I discuss the use of these two methods in this forum post.

Thank you @xchoo ! But how can I load the weights later?

I think this post may be able to help you: How to store the weights and reproduce them in the next run? - General Discussion - Nengo forum

Thank you @YL0910 . I saw that thread. I’m not sure what NengoBio is, and as you mentioned there, I just need to save/load the weights of one connection trained with PES.

You can use the following code to get the connection weights between neurons:

  weights = sim.data[conn_1].weights

Loading weights can be done in this way, but be careful that the matrix is of the correct size:

conn = nengo.Connection(ens_a, ens_b, transform=weights)

Hope my answer can help you! :grinning:

In the forum post I linked, I do discuss how to load the weights into a Nengo model after it’s been saved.
A quick summary of how to do this is to:

  1. Set seeds for the ensembles that the saved weights are associated with
  2. Create a model and run the simulation (round 1)
  3. Save the weights at the end of the first simulation
  4. Create a second model (identical to the first, with the same seeds) that uses the saved weights
  5. Run the simulation (round 2)

In the second model, it is not necessary to have learning enabled.

Identifying which ensembles to seed requires some knowledge of the learning rule (and of the NEF) you are using in the first model. Taking the PES learning rule as an example, its documentation states that it modifies the decoders of the connection. The decoders are applied to the activity of the “pre” population, so only the “pre” population needs to be seeded (although you can seed both ensembles if you are unsure).

Another thing you will have to take care of is how to use the trained weights in the second model.
For the PES learning rule, the weights you obtain from the connection are the decoders for that connection. To use them in the second model, you’ll need to make a connection from the “pre” population’s neurons. This bypasses the default decoders (if you connect from the regular ensemble, Nengo will solve for a new set of decoders) and uses the learned decoders instead.

Note that if you use a learning rule which modifies the entire weight matrix (i.e., something other than the PES learning rule), you’ll need to make a neuron-to-neuron connection. In that case, you may also need to scale the weights by the “post” population’s neuron gains (see the forum post I linked above; it has an example of this).

Here’s some example code that trains a network with the PES learning rule, saves the decoders, then uses them in a second static (no-learning) model:

import matplotlib.pyplot as plt
import nengo
import numpy as np

seed = 10  # Define a seed for the seeded ensemble
simtime = 5  # Define simulation runtime, to use for the weights probe

# Model with learning
with nengo.Network() as model:
    inp = nengo.Node(lambda t: np.sin(2 * t * np.pi))

    # Set seeds for ensembles that the saved weights are associated with
    pre = nengo.Ensemble(30, 1, seed=seed)
    post = nengo.Ensemble(30, 1)
    nengo.Connection(inp, pre)

    # Set up learning rule
    conn = nengo.Connection(pre, post, function=lambda x: 0)
    conn.learning_rule_type = nengo.PES()

    # Create error population and connections
    err = nengo.Node(size_in=1)
    nengo.Connection(post, err)
    nengo.Connection(pre, err, transform=-1, function=lambda x: -x)
    nengo.Connection(err, conn.learning_rule)

    # Add probes
    p_in = nengo.Probe(inp)
    p_out = nengo.Probe(post, synapse=0.005)
    p_err = nengo.Probe(err, synapse=0.005)

    # Add probe for connection weights (decoders)
    p_weights = nengo.Probe(conn, "weights", sample_every=simtime)

# Run the simulation
with nengo.Simulator(model) as sim:
    sim.run(simtime)

# Save the learned decoders
weights = sim.data[p_weights][-1]


# Model with loaded weights and no learning
with nengo.Network() as model2:
    inp = nengo.Node(lambda t: np.sin(2 * t * np.pi))

    # Create the ensembles
    pre = nengo.Ensemble(30, 1, seed=seed)
    post = nengo.Ensemble(30, 1)
    nengo.Connection(inp, pre)

    # Create a connection from pre (neurons) to post (ensemble) using the saved
    # decoder weights
    nengo.Connection(pre.neurons, post, transform=weights)

    p_in2 = nengo.Probe(inp)
    p_out2 = nengo.Probe(post, synapse=0.005)

# Run the second model
with nengo.Simulator(model2) as sim2:
    sim2.run(simtime)

# Plot the results. The first figure should show the network learning the
# function y=-x. The second figure should show that the static network
# replicates the learned function without any additional learning.
plt.figure()
plt.plot(sim.trange(), sim.data[p_in], label="input")
plt.plot(sim.trange(), sim.data[p_out], label="learned output")
plt.legend()
plt.figure()
plt.plot(sim2.trange(), sim2.data[p_in2], label="input")
plt.plot(sim2.trange(), sim2.data[p_out2], label="static output")
plt.legend()
plt.show()
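The example above keeps the weights in memory within one script; to reuse them across genuinely separate runs, you can persist them to disk with NumPy (the filename and the placeholder array here are illustrative):

```python
import numpy as np

# Stand-in for the decoders probed at the end of the learning run,
# e.g. sim.data[p_weights][-1] (shape: output dims x pre neurons)
weights = np.ones((1, 30))

# First run: save the learned decoders to disk
np.save("learned_decoders.npy", weights)

# Later run: load them back and pass them as the connection transform
loaded = np.load("learned_decoders.npy")
print(loaded.shape)
```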

Thanks @YL0910 and @xchoo. Actually, the code

conn = nengo.Connection(ens_a, ens_b, transform=weights)

that @YL0910 suggested is not working due to size issues, but from @xchoo’s example I see that the connection needs to be made from the neurons:

nengo.Connection(pre.neurons, post, transform=weights)

This is what I missed.

Thank you very much!