Here’s a minimal example of what I was thinking:
import nengo
import numpy as np

with nengo.Network() as model:
    a = nengo.Ensemble(n_neurons=100, dimensions=1)
    b = nengo.Ensemble(n_neurons=50, dimensions=1)

    # initial random weights from a's neurons to b's neurons
    w = 2 * np.random.randn(b.n_neurons, a.n_neurons) / b.n_neurons

    def my_rule(t, x):
        global w
        output = np.dot(w, x) * 0.001  # apply weights (0.001 = dt, to scale the spikes)
        w += np.random.randn(*w.shape) * 0.01  # placeholder learning rule
        return output

    learner = nengo.Node(my_rule, size_in=a.n_neurons,
                         size_out=b.n_neurons)
    nengo.Connection(a.neurons, learner, synapse=None)
    nengo.Connection(learner, b.neurons, synapse=0.05)
So the Node applies the weight matrix to the incoming neural activity and then applies the learning rule to update the weights.
A better way to implement this, though, would be either as a Node subclass or, even better, as a nengo.Process (since a Process can support resetting the simulation). The same basic idea would be used either way; a rough sketch of the Process version is below.
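For example, here is a minimal sketch of the Process version. The class name RandomLearning is made up, and it assumes the Nengo 2.x make_step signature (Nengo 3.0 and later also pass a state argument to make_step); treat it as a starting point rather than a finished implementation:

class RandomLearning(nengo.Process):
    """Hypothetical Process applying the same random-walk weight update."""

    def make_step(self, shape_in, shape_out, dt, rng):
        # The weights are created inside make_step, so Simulator.reset()
        # rebuilds the step function and reinitializes them.
        w = 2 * rng.randn(shape_out[0], shape_in[0]) / shape_out[0]

        def step(t, x):
            nonlocal w
            output = np.dot(w, x) * dt  # dt replaces the hard-coded 0.001
            w = w + rng.randn(*w.shape) * 0.01  # placeholder learning rule
            return output

        return step

# inside the network, replacing the my_rule Node:
learner = nengo.Node(RandomLearning(), size_in=a.n_neurons,
                     size_out=b.n_neurons)

A nice side effect of this version is that the weights are initialized from the rng the simulator passes to make_step, so the whole model can be made reproducible with a single seed.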