How to improve the effectiveness of the PES learning rule?

Hello everyone, I’m trying to use PES for a simple data fitting task; my code is below. However, the results are not satisfactory: the error is large. I have tried adjusting the learning rate (from 1e-1 down to 1e-3), but the results are still not ideal. How should I adjust the model in this situation? I also noticed that when fitting one-dimensional data the error is much smaller, so when the data dimensionality increases, what should I change to get a better fit? The output of the program below is [0.14 0.19].

Best regards.

import numpy as np
import nengo

q = 2  # dimensionality of the data
model = nengo.Network()
with model:
    # -- input and pre population
    inp = nengo.Node([-0.04, 0.13])
    pre = nengo.Ensemble(300, dimensions=q)
    post = nengo.Ensemble(300, dimensions=q)
    error = nengo.Ensemble(300, dimensions=q)
    nengo.Connection(inp[0], pre[0])
    nengo.Connection(inp[1], pre[1])
    # error = actual - target
    nengo.Connection(post[0], error[0])
    nengo.Connection(post[1], error[1])
    nengo.Connection(inp[0], error[0], transform=-1)
    nengo.Connection(inp[1], error[1], transform=-1)
    conn = nengo.Connection(pre, post, learning_rule_type=nengo.PES(learning_rate=1e-1))
    # the error signal must be connected to the learning rule
    nengo.Connection(error, conn.learning_rule)
    post_probe = nengo.Probe(post)

with nengo.Simulator(model) as sim:
    sim.run(10)
a = sim.data[post_probe]
print(a[-1, :])

Here are a few tips:

  • Rather than printing a single number, plotting the results will give you much more information and intuition for what’s going on.
  • Your output probe has no synapse on it. This means that the results are not being filtered at all, and are thus quite noisy. When you take the last value with your print(a[-1, :]) statement, there’s a lot of variability there (you’ll also notice this if you run your script above a number of times). Using nengo.Probe(post, synapse=0.03) will add a 30 ms filter, which will make things much cleaner.
  • Your input values (which is what you’re learning to match) are quite close to zero. This can make it difficult to tell whether the network is learning anything. If the actual values that you need to match are small, you may want to change the radius on your ensembles so that they can represent a smaller range (and thus represent it better).
  • When you’re working with 2D ensembles, you don’t need separate connections for each dimension. e.g. you can do nengo.Connection(inp, pre). The transform in this case defaults to the identity matrix.

Thank you very much for your detailed response! I am currently exploring another idea: when fitting multidimensional data, is it feasible in Nengo to lock (stop training) any dimension whose error falls below a specified tolerance, and continue training only the remaining dimensions?
Any suggestions are welcome!