Building a Learning Node for testing purposes

I want to experiment with an alternative BCM rule where the learning rate is negative and the thresholding factor selects for neurons that are firing less than the average rate. However, I’d rather not have to go through the whole Nengo build system to accomplish this. @tcstewar mentioned doing this with a nengo.Node first to allow for quick iteration, but I’m unclear on how to do this properly. Are there any examples of this in any repositories?
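
For reference, the standard BCM update I’m starting from looks roughly like this in NumPy (the names and sizes below are just placeholders for the sketch):

import numpy as np

# placeholder values, just to make the sketch self-contained
pre = np.zeros(4)     # presynaptic activities
post = np.zeros(2)    # postsynaptic activities
theta = np.zeros(2)   # running average of the postsynaptic activities
kappa = 1e-9          # learning rate

# (post - theta) is the thresholding factor: it is positive for neurons firing
# above their average rate, which is the part I want to invert
delta_w = kappa * np.outer(post * (post - theta), pre)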

The best I can currently come up with is a nengo.Node that takes in both the pre and the post ensemble’s spikes, and outputs the filtered pre spikes weighted according to the learning rule’s saved state. Is this how everyone else has done it?

Here’s a minimal example of what I was thinking:

import nengo
import numpy as np

with nengo.Network() as model:
    a = nengo.Ensemble(n_neurons=100, dimensions=1)
    b = nengo.Ensemble(n_neurons=50, dimensions=1)

    # initial random weight matrix from a's neurons to b's neurons
    w = 2 * np.random.randn(b.n_neurons, a.n_neurons) / b.n_neurons

    def my_rule(t, input):
        global w
        # spikes arrive with amplitude 1/dt, so scale the weighted sum by dt
        output = np.dot(w, input) * 0.001
        w += np.random.randn(*w.shape) * 0.01   # placeholder learning rule
        return output

    learner = nengo.Node(my_rule, size_in=a.n_neurons,
                         size_out=b.n_neurons)

    nengo.Connection(a.neurons, learner, synapse=None)
    nengo.Connection(learner, b.neurons, synapse=0.05)

So the node applies the weight matrix and then updates it with the learning rule.

A better way to implement this, though, would be either to do it as a Node subclass or, even better, as a Process (since a Process can support resetting the simulation). But the same basic idea would be used.
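
For example, a Process version of the toy rule above might look something like this (an untested sketch; RandomLearning and its arguments are names made up for this example):

import nengo
import numpy as np

class RandomLearning(nengo.Process):
    """Toy 'learning' process wrapping the weight matrix from the example above."""

    def __init__(self, initial_weights, **kwargs):
        super(RandomLearning, self).__init__(
            default_size_in=initial_weights.shape[1],
            default_size_out=initial_weights.shape[0], **kwargs)
        self.initial_weights = initial_weights

    def make_step(self, shape_in, shape_out, dt, rng):
        # the mutable state lives inside the step closure, so resetting the
        # simulator rebuilds it from the initial weights
        w = np.array(self.initial_weights, dtype=float)

        def step(t, x):
            out = np.dot(w, x) * dt
            w[...] += rng.randn(*w.shape) * 0.01  # placeholder learning rule
            return out

        return step

In the model it would then be used as learner = nengo.Node(RandomLearning(w), size_in=a.n_neurons, size_out=b.n_neurons), and resetting the simulator should restore the initial weights.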


For ease of future reference, here is my implementation of the BCM learning rule for inclusion in a node:

import nengo
import numpy as np

dt = 0.001  # simulation timestep assumed by the node (Nengo's default)

class FakeBCM(object):

    def __init__(self, learning_rate=1e-9, in_neurons=4, out_neurons=2, theta_tau=1.0,
                 sample_every=0.1, start_weights=None):
        self.kappa = learning_rate * dt
        assert start_weights is not None
        self.omega = start_weights.copy()
        self.in_nrns = in_neurons
        # low-pass filter that tracks the average postsynaptic activity (theta)
        self.lowpass = nengo.Lowpass(theta_tau).make_step(out_neurons, out_neurons, dt, None)
        self.weight_history = []
        self.period = sample_every / dt

    def bcm_func(self, t, x):
        # the node input is the pre activities followed by the post activities
        in_rates = x[:self.in_nrns]
        out_rates = x[self.in_nrns:]
        theta = self.lowpass(t, out_rates)

        # BCM update: delta_omega = kappa * post * (post - theta) * pre^T
        self.omega += np.outer(self.kappa * out_rates * (out_rates - theta), in_rates)

        # save a copy of the weights every `sample_every` seconds
        if (t / dt % self.period) < 1:
            self.weight_history.append(self.omega.copy())

        return np.dot(self.omega, in_rates)
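
For reference, here is roughly how I wire it into a model (a sketch only; the pre and post ensembles, their sizes, the starting weights, and the synapse values are placeholders):

with nengo.Network() as model:
    pre = nengo.Ensemble(n_neurons=4, dimensions=1)
    post = nengo.Ensemble(n_neurons=2, dimensions=1)

    bcm = FakeBCM(in_neurons=pre.n_neurons, out_neurons=post.n_neurons,
                  start_weights=np.random.randn(post.n_neurons, pre.n_neurons) * 1e-4)

    # the node takes the pre activities followed by the post activities
    learner = nengo.Node(bcm.bcm_func,
                         size_in=pre.n_neurons + post.n_neurons,
                         size_out=post.n_neurons)

    # filter the spikes so the node sees smoothed activities
    nengo.Connection(pre.neurons, learner[:pre.n_neurons], synapse=0.005)
    nengo.Connection(post.neurons, learner[pre.n_neurons:], synapse=0.005)
    nengo.Connection(learner, post.neurons, synapse=0.005)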

You can find an example of its use and a comparison to the regular BCM rule in my repository.
