Hebbian Learning for All-to-All Connected Neurons

Hello folks,

I am trying to learn the weights for the connections between the neurons of two ensembles (connected in an all-to-all fashion). I used a transform matrix initialized with zeros, but it is not learning any weights. This is my code (I have used only one training sample for now):

import numpy as np
import nengo
from nengo.dists import Choice

# All-to-all transform matrix, initialized with zeros
weights = np.zeros((5, 5))

with nengo.Network() as model:
    x_inp = nengo.Node([1, 0, 0, 0, 1])
    x = nengo.Ensemble(5, 1, intercepts=Choice([0.1]), max_rates=Choice([100]), neuron_type=nengo.LIF())
    nengo.Connection(x_inp, x.neurons)

    y_inp = nengo.Node([0, 1, 0, 1, 0])
    y = nengo.Ensemble(5, 1, intercepts=Choice([0.1]), max_rates=Choice([100]), neuron_type=nengo.LIF())
    nengo.Connection(y_inp, y.neurons)

    conn = nengo.Connection(x.neurons, y.neurons, transform=weights,
                            learning_rule_type=nengo.learning_rules.BCM())

However, my initial weights still remain an all-zero matrix (i.e., they do not get updated), and the connection weights are also all zeros. How can I correct this?

Hi @RohanAj, this is probably because you are using an all-zero matrix, which is equivalent to not having any input at all through that connection (since the input is multiplied by the weights, which are all zero in your case). Try using randomly initialized weights and you should be able to see the learning happen.
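
For example, a minimal sketch of what that could look like (the weight scale here is just an illustrative guess; you may need to tune it for your neuron parameters):

import numpy as np

# Small random (non-zero) initial weights so the presynaptic spikes actually
# drive the postsynaptic neurons and the learning rule has activity to work with.
# The 5x5 shape matches your two 5-neuron ensembles.
weights = np.random.uniform(1e-4, 1e-3, size=(5, 5))

# Inside your model context: the same connection as before, but with non-zero weights
conn = nengo.Connection(
    x.neurons,
    y.neurons,
    transform=weights,
    learning_rule_type=nengo.learning_rules.BCM(),
)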

Hi @Timodz, thanks for the reply. Actually the problem was that I wasn’t extracting the weights properly. It gave me the correct output when I probed the weights instead. I just discovered this a few hours back.

Hi @RohanAj

Indeed! You didn’t provide the code you were using to gather the connection weights, but if you use this:

with nengo.Simulator(model) as sim:
    sim.run(10)
    updated_weights = sim.model.params[conn].weights

This code will only return the initial weights for the connection. To get the updated weights, you can either use nengo.Probe(conn, "weights") as per this example (a sketch of that approach follows the code below), or you can get them by looking at the sim.signals dictionary like so:

with nengo.Simulator(model) as sim:
    sim.run(10)
    updated_weights = sim.signals[sim.model.sig[conn]["weights"]]
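
And here is a rough sketch of the probe-based approach (note that the probe has to be created inside the model context before the simulator is built):

with model:
    # Record the full weight matrix at every timestep
    probe_weights = nengo.Probe(conn, "weights")

with nengo.Simulator(model) as sim:
    sim.run(10)

# sim.data[probe_weights] has shape (timesteps, post_neurons, pre_neurons),
# so the last entry is the final set of learned weights
updated_weights = sim.data[probe_weights][-1]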

Thanks @xchoo

I was using this earlier and getting an all-zero matrix :sweat_smile:

Hey @xchoo
Also, is the STDP learning rule not available in Nengo 3.0? I used the STDP code that you had provided in another discussion, but I just wanted to know if there is a built-in function for STDP.

If you are referring to the STDP code I provided here or here, then no, there is currently no built-in learning rule class for STDP.

The STDP code I provided was based on some pre-development work by one of our Nengo developers and we haven’t yet had the chance to properly integrate it (due to its complexity and numerous test cases) into the core Nengo code base.

@xchoo, I can also use that STDP code for direct neuron-to-neuron connections (instead of encoded-decoded vectors), right?

Also, how should I cite the STDP code if I use it in my research project?

Yes, you can. If you look at the example code I uploaded to this forum post (scroll down to where it says “STDP Approach”; there is an example_stdp.py file), you’ll see I used it for a neuron-to-neuron connection.

That’s a very good question! I’m not entirely sure. I suppose you can cite one of the two forum posts that link to the STDP code.

Hi @xchoo,
I tried using the STDP code with the NengoDL simulator; however, I received this error.

Hey @xchoo,
I also wanted to ask: do the built-in learning rules work the same way in Nengo and NengoDL?

The learning rules that come with the Nengo install work the same way in NengoDL.

This is to be expected. The STDP code I provided has only been coded to work with Nengo. If you want to use it with NengoDL, you’ll need to write custom builder functions for the STDP learning rule.
To do this, you’ll need to define and register a NengoBuilder function (e.g., here is the function for the PES learning rule), as well as define and register an OpBuilder function that translates the learning rule into TensorFlow signals and operations (e.g., this is the function for the PES learning rule).
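
As a very rough sketch of that registration pattern (STDP and SimSTDP below are hypothetical placeholders for the learning rule class and the build-time operator defined in the STDP code, and the exact OpBuilder method names and signatures depend on your NengoDL version, so treat this as the general shape rather than working code):

from nengo_dl.builder import Builder, NengoBuilder, OpBuilder

# `STDP` and `SimSTDP` are placeholders for the learning rule class and the
# operator defined in the forum's STDP code; they are not real Nengo classes.

@NengoBuilder.register(STDP)
def build_stdp(model, stdp, rule):
    # Mirror the Nengo build function: create the signals the rule needs
    # and add a SimSTDP operator to the model.
    ...

@Builder.register(SimSTDP)
class SimSTDPBuilder(OpBuilder):
    def build_pre(self, signals, config):
        # Gather the signals used by the grouped SimSTDP operators.
        # (Method names/signatures here follow NengoDL 3.x; check your version.)
        super().build_pre(signals, config)

    def build_step(self, signals):
        # Compute the weight update with TensorFlow operations and
        # write the result back into the weights signal.
        ...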

I am not very familiar with programming in TensorFlow, so I’m not sure how much help I can be to you in this case. @drasmuss or @Eric may be able to provide further assistance to help you convert the STDP Nengo code into something that is NengoDL compatible.

Hey @xchoo,
Thanks for the help!

Is there equivalent NengoDL code for this? It showed that the ‘signals’ attribute is not available when I switched the simulator from Nengo to NengoDL. Also, my RAM keeps crashing because I am probing the weights at every time step.

Unfortunately, the sim.signals attribute is only in Nengo, and NengoDL uses a different data architecture to store all of the values related to the simulation.

However, it seems like what you are attempting to do is get the updated weights at the end of the simulation. To accomplish this, you can use the sample_every option available on all nengo.Probe objects to record the weights only at the end of the simulation.

As an example, by setting the sample_every value to the same time as the total length of the simulation (see the code below), the probe will only record the weights at the last timestep of the simulation.

sim_runtime = 10  # Amount of time you want to run the simulation for
with nengo.Network() as model:
    ... # define Nengo model
 
    probe_weights = nengo.Probe(conn, "weights", sample_every=sim_runtime)

with nengo_dl.Simulator(model) as sim:
    sim.run(sim_runtime)

You can identify which timestamp the probed data is associated with by using the same value for the sample_every parameter with sim.trange():

sim.trange(sample_every=sim_runtime)  # Returns [10.], indicating probes with this sample interval have values recorded only at t=10.

Since the probed data only contains data for the last timestep, you can then retrieve the weights with:

updated_weights = sim.data[probe_weights][-1]

Here’s an example script with all of the things I mentioned above, as well as code that demonstrates how to retrieve the initial connection weights used in the model. test_weight_probe.py (3.0 KB)
Feel free to play around with the sample_every parameter (for both nengo.Probe and sim.trange()) to see what it does! :smiley:

@xchoo Thank you so much for the help!

NengoDL also has the keep_history configuration option, which you can set either globally or for an individual probe. If you set keep_history=False, then NengoDL will just store the most recent value for the probe, rather than the values for all timesteps.
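
A rough sketch of what that could look like (assuming a weight probe named probe_weights as in the earlier example; double-check the per-probe config line against the NengoDL documentation for your version):

import nengo
import nengo_dl

with nengo.Network() as model:
    # Register the keep_history option (True keeps the default behaviour
    # of recording every timestep for all probes).
    nengo_dl.configure_settings(keep_history=True)

    ...  # define the rest of the model, including `conn`

    probe_weights = nengo.Probe(conn, "weights")
    # Override the global setting for this particular probe so that only
    # the most recent weight value is stored.
    model.config[probe_weights].keep_history = False

with nengo_dl.Simulator(model) as sim:
    sim.run(10)

# Only the latest recorded value is available for this probe.
updated_weights = sim.data[probe_weights][-1]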
