Recurrent BCM connection problems

I’m trying to make a recurrent connection that uses the BCM rule for learning. The goal is to build a Hopfield-like memory network. However, I have two (connected) problems that I hope you can help me with.

(1) I do not want a connection from a neuron to itself. When I normally make a recurrent connection, I just specify a transform matrix with a zero diagonal. However, that doesn’t work with BCM switched on. As an alternative, I’m now iterating over all neurons in my mem ensemble and slicing the connections:

    mem = nengo.Ensemble(n_neurons_mem, dimensions=D,
                         intercepts=nengo.dists.Uniform(.1, .1))

    conn = []
    for x in range(mem.n_neurons):
        # connections to all neurons with a lower index
        if x > 0:
            conn.append(nengo.Connection(mem.neurons[x], mem.neurons[:x],
                transform=np.zeros((x, 1), dtype=float),
                learning_rule_type=bcm2_rule.BCM2()))
        # connections to all neurons with a higher index
        if x < mem.n_neurons - 1:
            conn.append(nengo.Connection(mem.neurons[x], mem.neurons[x+1:],
                transform=np.zeros((mem.n_neurons - 1 - x, 1), dtype=float),
                synapse=.05,
                learning_rule_type=bcm2_rule.BCM2()))

This is not only ugly, but also very slow for a substantial number of neurons, as it creates 2 * n_neurons separate connections. Is there a better way to do this, preferably with just a single connection?

(2) While slicing works for the connection itself, the BCM rule ignores it. I solved this by replacing

    pre_activities = model.sig[get_pre_ens(conn).neurons]['out']
    post_activities = model.sig[get_post_ens(conn).neurons]['out']

with

    pre_activities = model.sig[get_pre_ens(conn).neurons]['out'][conn.pre_slice]
    post_activities = model.sig[get_post_ens(conn).neurons]['out'][conn.post_slice]

in build_bcm. Is that the correct/efficient way to do this?

Thanks for any suggestions!

Could you give an example of what doesn’t work and what you mean by this (an error, or different behaviour than what you expected)?

This sounds like it could be a bug. If so, what you did might be correct, but I’d like to follow up on this to create a GitHub issue and make sure that we resolve the problem. To help, could you give an example and show what you mean by it being ignored? Thanks!

What happens is that BCM learns positive weights for the diagonal of the recurrent connection matrix, while I don’t want neuron-to-itself connections. This is expected behaviour, since each neuron’s activity is perfectly correlated with itself, and it is why I came up with the workaround described above. However, that workaround makes the model very slow, so I was hoping for a better solution.
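For example, this minimal sketch (not my actual model, just an illustration using the stock nengo.BCM rule on a full neuron-to-neuron recurrent connection) lets you watch the self-weights grow:

    import numpy as np
    import nengo

    with nengo.Network() as model:
        stim = nengo.Node(np.sin)
        mem = nengo.Ensemble(10, dimensions=1,
                             intercepts=nengo.dists.Uniform(.1, .1))
        nengo.Connection(stim, mem)
        # full neuron-to-neuron recurrent connection, initialised to zero
        recurrent = nengo.Connection(mem.neurons, mem.neurons,
                                     transform=np.zeros((10, 10)),
                                     learning_rule_type=nengo.BCM())
        weights_p = nengo.Probe(recurrent, "weights", sample_every=0.1)

    with nengo.Simulator(model) as sim:
        sim.run(1.0)

    # inspect the self-weights (the diagonal of the learned weight matrix)
    print(np.diag(sim.data[weights_p][-1]))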

That said, I have now come up with a better solution: I adapted the learning rule so that you can specify a mask with a maximum weight for each entry in the connection matrix. So I guess this issue is actually solved, although a more general solution might be preferable, in which you can specify that you don’t want a full weight matrix for a connection.
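Roughly, the core of the adapted update looks like this (simplified sketch with illustrative names; the actual rule has the usual Nengo builder plumbing around it):

    import numpy as np

    def masked_bcm_delta(pre_filtered, post_filtered, theta, weights,
                         max_weights, alpha):
        """Sketch of a BCM update with a per-weight maximum.

        max_weights has the same shape as the weight matrix; putting zeros
        on its diagonal forbids neuron-to-itself connections.
        """
        # standard BCM delta, as in nengo/builder/learning_rules.py
        delta = np.outer(alpha * post_filtered * (post_filtered - theta),
                         pre_filtered)
        # clip the updated weights to the per-weight maxima and return the
        # effective delta
        return np.clip(weights + delta, None, max_weights) - weights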

Sure, here’s an example:

    mem = nengo.Ensemble(10, dimensions=D,
                         intercepts=nengo.dists.Uniform(.1, .1))
    nengo.Connection(mem.neurons[:1], mem.neurons[:5],
                     transform=np.zeros((5, 1), dtype=float),
                     learning_rule_type=nengo.BCM())

If I run this, I get the following error:

  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/nengo/builder/learning_rules.py", line 103, in step_simbcm
    alpha * post_filtered * (post_filtered - theta), pre_filtered)
ValueError: could not broadcast input array from shape (10,10) into shape (5,1)

which is caused by the BCM rule not slicing the connection.

Hope this is clear - let me know if you need more info.

Great point. I’ve added this as a consideration to https://github.com/nengo/nengo/issues/1483. In particular, we might be coming up with a way of defining sparse transforms that could satisfy your use case. You can subscribe to that GitHub issue if you are interested in following its progress.
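To give a rough idea, such a transform could let you specify only the off-diagonal entries of the recurrent weight matrix. The sketch below assumes the proposed nengo.transforms.Sparse API, so the details may change, and whether learning rules such as BCM can run on top of a sparse transform still needs to be worked out:

    import numpy as np
    import nengo

    n = 10
    # all (post, pre) index pairs except the diagonal, i.e. no self connections
    indices = np.array([(i, j) for i in range(n) for j in range(n) if i != j])

    with nengo.Network():
        mem = nengo.Ensemble(n, dimensions=1)
        nengo.Connection(mem.neurons, mem.neurons,
                         transform=nengo.transforms.Sparse(
                             (n, n), indices=indices,
                             init=np.zeros(len(indices))))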

Cool! If you find this is working well, there may be interest in having it submitted as a PR to nengo-extras. Feel free to do so – that way we at least have a record of the problem and your solution.

Thanks! I’ve posted this bug here: https://github.com/nengo/nengo/issues/1526.