Convention for Error Population in PES Learning?

When playing around with PES Learning, I stumbled across some (at least for me) unexpected behaviour. Here is a minimal example (taken and slightly adapted from @tcstewar’s nengo_learning_examples repository):

import nengo
import numpy as np

model = nengo.Network()
with model:
    stim = nengo.Node(0)
    
    pre = nengo.Ensemble(n_neurons=100, dimensions=1)
    nengo.Connection(stim, pre)
    
    post = nengo.Ensemble(n_neurons=100, dimensions=1)
    
    def init_func(x):
        return 0
    learn_conn = nengo.Connection(pre, post, function=init_func,
                                  learning_rule_type=nengo.PES())
                                  
    error = nengo.Ensemble(n_neurons=100, dimensions=1)
    
    def desired_func(x):
        # adjust this to change what function is learned
        return x
    nengo.Connection(stim, error, function=desired_func, transform=1)
    nengo.Connection(post, error, transform=-1)
    
    nengo.Connection(error, learn_conn.learning_rule)
    
    stop_learn = nengo.Node(0)
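    # inhibitory connection to the error neurons: setting stop_learn to 1 silences
    # the error population and thereby pauses learning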
    nengo.Connection(stop_learn, error.neurons, transform=-10*np.ones((100,1)))

The only things I changed compared to the original code are swapping the signs of the transforms on the two connections into the error population and setting both nodes to 0. When I run this in the Nengo GUI, after a few seconds the post population starts drifting towards either 1 or -1, even without any input from stim. The direction seems to be random.
Is this the desired behaviour? If so, what is the purpose/reason for it? Is it a convention that the post-synaptic population has to be connected with a positive sign to the error population when using the PES learning rule?

Our convention for error is error = actual - target, where actual is the current output of the post population and target is what you want it to be. So yes, in your case you want to switch the signs on both of your connections into the error population.
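As a minimal sketch (reusing the names from your model above, with everything else unchanged), the two connections into the error population would then become:

    # target enters the error population with a negative sign
    nengo.Connection(stim, error, function=desired_func, transform=-1)
    # actual output of post enters with a positive sign
    nengo.Connection(post, error, transform=1)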

We changed the sign of our error a while back. We used to use error = target - actual, but we found that this was the opposite of how error is typically presented in the literature (particularly the engineering/machine learning literature).


In addition to what @Eric said, the direction it flies off in is essentially random: the noisy activity of your error population starts near zero, but it won't be exactly zero, so whichever direction it randomly deviates in gets continually reinforced, because with the flipped sign you keep pushing the output further and further in the direction of the error rather than correcting it.

If you set the seed (nengo.Network(seed=0)) then it will always go off in the same direction. It depends on the baseline activity of the error population, which is made deterministic by setting the seed (i.e., there’s no randomness in the learning rule itself).
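For example (a minimal sketch; the rest of the model stays the same):

    # fixing the seed makes the neuron parameters, and hence the error population's
    # baseline activity, deterministic, so the runaway direction is reproducible
    model = nengo.Network(seed=0)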


@Eric, @tbekolay Thanks for your clarifying explanations!