I’m trying to make a recurrent connection that uses the BCM rule for learning. The goal is a Hopfield-like memory network. However, I have two (connected) problems that I hope you can help me with.
(1) I do not want a connection from a neuron to itself. When I build a recurrent connection normally, I just specify a transform matrix with a zero diagonal. However, that doesn’t work once BCM is switched on, because the learning rule updates every entry of the weight matrix, so the zeroed diagonal doesn’t stay zero. As an alternative, I’m now iterating over all neurons in my mem ensemble and slicing the connections:
conn = []
mem = nengo.Ensemble(n_neurons_mem, dimensions=D,
                     intercepts=nengo.dists.Uniform(.1, .1))
for x in range(mem.n_neurons):
    # connect neuron x to all neurons below it...
    if x > 0:
        conn.append(nengo.Connection(mem.neurons[x], mem.neurons[:x],
                                     transform=np.zeros((x, 1), dtype=float),
                                     synapse=.05,
                                     learning_rule_type=bcm2_rule.BCM2()))
    # ...and to all neurons above it, so x -> x is never created
    if x < mem.n_neurons - 1:
        conn.append(nengo.Connection(mem.neurons[x], mem.neurons[x+1:],
                                     transform=np.zeros((mem.n_neurons - 1 - x, 1), dtype=float),
                                     synapse=.05,
                                     learning_rule_type=bcm2_rule.BCM2()))
This is not only ugly but also very slow for any substantial number of neurons, since it creates roughly 2 * n_neurons connections (each with its own learning rule). Is there a better way to do this, preferably with just a single connection?
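For what it’s worth, the effect I’m after with all those sliced connections can be written as a single masked weight update. This is just a plain NumPy sketch of the idea, not Nengo’s actual build code; `theta` and `kappa` are stand-ins (Nengo’s real BCM computes the modification threshold as a low-pass filter of the post activity):

```python
import numpy as np

def masked_bcm_step(weights, pre, post, theta, kappa=1e-4):
    """One BCM-style weight update with a zero-diagonal mask.

    delta_ij = kappa * post_i * (post_i - theta_i) * pre_j,
    then the diagonal is forced back to zero so no neuron
    ever learns a connection to itself.
    """
    delta = kappa * np.outer(post * (post - theta), pre)
    new_w = weights + delta
    np.fill_diagonal(new_w, 0.0)  # keep self-connections at zero
    return new_w

n = 4
w = np.zeros((n, n))
pre = np.array([1.0, 0.5, 0.0, 2.0])
post = pre.copy()          # recurrent: post activities == pre activities
theta = np.full(n, 0.25)   # modification threshold (assumed constant here)

w = masked_bcm_step(w, pre, post, theta)
assert np.allclose(np.diag(w), 0.0)
```

If something like this mask could live inside a single learning rule, one full `mem.neurons -> mem.neurons` connection would replace all the sliced ones.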
(2) While slicing works for the connection itself, the BCM rule ignores it. I solved that by replacing
pre_activities = model.sig[get_pre_ens(conn).neurons]['out']
post_activities = model.sig[get_post_ens(conn).neurons]['out']
with
pre_activities = model.sig[get_pre_ens(conn).neurons]['out'][conn.pre_slice]
post_activities = model.sig[get_post_ens(conn).neurons]['out'][conn.post_slice]
in build_bcm, so the rule only sees the sliced pre and post activities. Is that the correct/efficient way to do this?
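To check my own reasoning, here is a NumPy illustration of what the patched build_bcm should see for one of the sliced connections (the slice values are made up for a hypothetical 5-neuron ensemble, e.g. mem.neurons[2] -> mem.neurons[:2]):

```python
import numpy as np

# Hypothetical full activity signal of a 5-neuron ensemble.
activities = np.array([0.0, 1.0, 0.5, 2.0, 0.1])

pre_slice = slice(2, 3)    # what conn.pre_slice would be for mem.neurons[2]
post_slice = slice(0, 2)   # what conn.post_slice would be for mem.neurons[:2]

pre = activities[pre_slice]    # shape (1,): the single pre neuron
post = activities[post_slice]  # shape (2,): the two post neurons

# A BCM-shaped delta computed from the sliced signals has shape (2, 1),
# which matches the transform of the sliced connection above.
theta = 0.25
delta = np.outer(post * (post - theta), pre)
assert delta.shape == (2, 1)
```

So the shapes line up, but I’m not sure whether indexing the signals directly in the builder is the idiomatic way to do this in Nengo, or whether it copies the signal on every step.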
Thanks for any suggestions!