LIF neuron activity

That is correct: the learning rule will update the individual elements of the full connection weight matrix. The reason you are not seeing any changes is that the code you are using returns the initial weights set on the connection, and this value is not updated as the simulation progresses. To get the weights while the simulation is running, you can do one of two things:

1. Get the connection weights from the `sim.signals` dictionary, like so:

```python
weights = sim.signals[sim.model.sig[conn]["weights"]]
```

2. Get the connection weights using a Nengo probe:

```python
with model:
    probe_weights = nengo.Probe(conn, "weights", sample_every=<sampling_interval_in_secs>)

with nengo.Simulator(model) as sim:
    ...

print(sim.data[probe_weights])
```

If you want to get the learned weights only at the end of a simulation run, you can set the `sample_every` parameter to the runtime of the simulation:

```python
probe_weights = nengo.Probe(conn, "weights", sample_every=<simulation_runtime>)
```

I’m not entirely sure what you want to achieve here, since your previous question indicates you are applying a learning rule to the connection as well. When we construct our Nengo models, connections with learning rules are typically initialized with a random function to demonstrate that the learning rule actually has some effect on the connection.

If you want to create a full connection weight matrix that implements a specific function, you need to use the NEF algorithm to do so. That is to say, you will need to have Nengo compute the decoders for you, and then multiply them with the encoders to get the full connection weight matrix. There are two methods for coding this, but both require you to create a Nengo simulator to build the model, since the build process is what creates the encoders and decoders. Because the Nengo simulator object is created multiple times, you will also need to seed the ensemble objects to ensure they are given identical parameters across the different simulations.
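To make the encoder/decoder multiplication concrete, here is a numpy-only sketch of the shapes involved (the sizes and random values are made up for illustration; in practice the decoders and encoders come from the built model, as shown below):

```python
import numpy as np

# Hypothetical sizes: 50 pre neurons, 40 post neurons, 1-D representation
n_pre, n_post, dims = 50, 40, 1
rng = np.random.default_rng(0)

decoders = rng.standard_normal((n_pre, dims))   # stand-in for the solved pre decoders
encoders = rng.standard_normal((n_post, dims))  # stand-in for the built post encoders

# Full connection weight matrix: one entry per (post, pre) neuron pair
weights = np.dot(encoders, decoders.T)
print(weights.shape)  # (n_post, n_pre)
```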

So, the general approach for computing the weight matrix is:

```python
# Define the ensembles needed for the connection weight matrix. Make sure they are seeded.
with nengo.Network() as prebuild:
    ens1 = nengo.Ensemble(..., seed=<seed_val1>)
    ens2 = nengo.Ensemble(..., seed=<seed_val2>)
    conn = nengo.Connection(...)

# Create a simulator object to build the ensembles
with nengo.Simulator(prebuild) as simbuild:
    # Extract out the ensemble parameters and compute the weight matrix
    ...

# If the weight matrix is to be used in a Nengo model, create the Nengo model with the
# same seeds as the `prebuild` model
with nengo.Network() as model:
    ...
    ens1 = nengo.Ensemble(..., seed=<seed_val1>)
    ens2 = nengo.Ensemble(..., seed=<seed_val2>)
    conn = nengo.Connection(ens1.neurons, ens2.neurons, transform=<weight_matrix>)
    ...
```

The actual computation of the weight matrix can be done in two ways. The first method is essentially the NEF algorithm: you solve for the decoders on the pre population that compute the function you want, then multiply them with the encoders of the post population to get the full weight matrix. Since you are doing everything manually, solving for the decoders requires invoking the Nengo solver directly, which takes a bit of work. The code to do this is outlined below (put it in the `# Extract out the ensemble parameters` section of the code above):

```python
import numpy as np

# Reference to the built ens1 object
built_ens1 = simbuild.data[ens1]

# Get the "x" values (evaluation points scaled by encoders) for ens1
x_vals = np.dot(built_ens1.eval_points, built_ens1.encoders.T / ens1.radius)

# Get the activity values corresponding to the x values
activities = ens1.neuron_type.rates(x_vals, built_ens1.gain, built_ens1.bias)

# Create the solver, and use it to solve for the decoders of ens1 that compute
# a specific output function
solver = nengo.solvers.LstsqL2(weights=True)
decoders, _ = solver(activities, output_func(built_ens1.eval_points))
# Note that if the ensemble implements a communication channel, no `output_func` is needed:
# decoders, _ = solver(activities, built_ens1.eval_points)

# Compute the full weight matrix by multiplying with the encoders of ens2
weights = np.dot(simbuild.data[ens2].encoders, decoders.T)
```
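As a sanity check on the decoder solve itself, here is a numpy-only sketch of the regularized least-squares step (a rough stand-in for `nengo.solvers.LstsqL2`, not its exact implementation; the tuning curves and target function are fabricated for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_eval = 30, 200

# Fabricated evaluation points and rectified-linear "tuning curves"
eval_points = np.linspace(-1, 1, n_eval)[:, None]                  # (n_eval, 1)
activities = np.maximum(
    0, eval_points * rng.standard_normal((1, n_neurons)) + rng.uniform(0, 1, n_neurons)
)                                                                  # (n_eval, n_neurons)
targets = eval_points ** 2                                         # output_func(x) = x**2

# Regularized least squares: solve (A^T A + m * sigma^2 * I) d = A^T Y
sigma = 0.1 * activities.max()
G = activities.T @ activities + n_eval * sigma ** 2 * np.eye(n_neurons)
decoders = np.linalg.solve(G, activities.T @ targets)              # (n_neurons, 1)

# Decoded estimate of the target function from the neural activities
approx = activities @ decoders
```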

The second method to compute the full weight matrix is to use the `solver` parameter on a Nengo connection. If you create a connection with a solver that has `weights=True`, Nengo will build the connection with the full weight matrix. One important note, however, is that this weight matrix contains the gains of the post population, and thus these gains need to be divided out to get the full connection weight matrix.

```python
with nengo.Network() as prebuild:
    ...
    # Create the connection with a `weights=True` solver
    conn = nengo.Connection(
        ens1, ens2, solver=nengo.solvers.LstsqL2(weights=True), function=output_func
    )

with nengo.Simulator(prebuild) as simbuild:
    weights = simbuild.data[conn].weights

    # Get the gains for ens2, shaped (n_post, 1) so they broadcast across each
    # row of the (n_post, n_pre) weight matrix
    ens2_gains = simbuild.data[ens2].gain[:, np.newaxis]

    # Divide the gains out of the weight matrix to get the full connection weight matrix
    weights = weights / ens2_gains
```
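To see why the element-wise division removes the gains, here is a numpy-only illustration (all arrays are fabricated; `gain[:, None]` broadcasts one gain per post neuron across its row of the weight matrix):

```python
import numpy as np

n_pre, n_post = 5, 4
rng = np.random.default_rng(2)

base_weights = rng.standard_normal((n_post, n_pre))  # stand-in for encoders @ decoders.T
gain = rng.uniform(1.0, 3.0, n_post)                 # one gain per post neuron

# What the weights=True solver produces: each row scaled by its post neuron's gain
solver_weights = gain[:, None] * base_weights

# Dividing the gains back out recovers the gain-free weight matrix
recovered = solver_weights / gain[:, None]
assert np.allclose(recovered, base_weights)
```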

NengoDL supports the creation and use of regular Nengo models with the TensorFlow functions (`predict`, `fit`, `compile`, etc.). So, all you have to do is create your Nengo model, then use the `nengo_dl.Simulator` object to call the `predict` function. Refer to this page for examples of how to use your (regular) Nengo model with NengoDL.
