hPES or supervised learning rule in Q/A example?

The example I am interested in is this:


I am eager to know which learning rule the package uses to simulate this example.
Is it the simultaneously supervised and unsupervised one (hPES)?
If so, what can we learn from this example?

Thank you in advance

There is no online learning in that example. If you are wondering how the connection weights are computed, that is done offline using the Neural Engineering Framework (NEF), which you can read more about here.
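For reference, the NEF computes decoding weights offline by solving a regularized least-squares problem over the neurons' tuning curves. Here is a minimal pure-Python sketch of that idea using two hypothetical rectified-linear neurons with made-up gains and biases; Nengo's actual solver works with LIF tuning curves and noise-based regularization, so treat this only as an illustration of the math:

```python
def activity(x, gain, bias):
    """Rectified-linear response of one hypothetical neuron to scalar x."""
    return max(0.0, gain * x + bias)

# Two hypothetical neurons encoding a scalar x on [-1, 1]: (gain, bias) pairs
neurons = [(10.0, 5.0), (-10.0, 5.0)]

# Sample the represented space and record the population activities
xs = [i / 10.0 for i in range(-10, 11)]
A = [[activity(x, g, b) for (g, b) in neurons] for x in xs]
target = xs  # decode the identity function f(x) = x

# Solve the 2x2 regularized least-squares system  (A^T A + lam*I) d = A^T y
lam = 0.1
G = [[sum(A[k][i] * A[k][j] for k in range(len(xs)))
      + (lam if i == j else 0.0) for j in range(2)] for i in range(2)]
u = [sum(A[k][i] * target[k] for k in range(len(xs))) for i in range(2)]

det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
d = [(G[1][1] * u[0] - G[0][1] * u[1]) / det,
     (G[0][0] * u[1] - G[1][0] * u[0]) / det]

# Decoded estimate of x at a test point; it roughly tracks the true value
xhat = sum(activity(0.5, g, b) * di for (g, b), di in zip(neurons, d))
```

These decoders are fixed once solved, which is why the example runs without any learning rule.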


Thank you very much.

Sorry, I meant this one:


Would you mind pointing me to four examples that use supervised, unsupervised, hPES, and reinforcement learning, respectively?

There is no online learning in that example either (none of the SPA examples have learning in them).

You can find all of the Nengo learning examples here https://www.nengo.ai/nengo/examples#learning. That includes supervised (PES) and unsupervised (Voja, Oja, BCM) learning. hPES refers to a combination of those rules (e.g. PES + BCM).
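To give a feel for the supervised case, here is a sketch of the PES rule in its decoder form, assuming the update Δd = −κ·e·a with error e = x̂ − target; the neuron parameters are hypothetical, and Nengo applies the equivalent update inside a running simulation rather than in a plain loop like this:

```python
def activities(x):
    """Hypothetical rectified-linear population response to scalar x."""
    return [max(0.0, 10.0 * x + 5.0), max(0.0, -10.0 * x + 5.0)]

decoders = [0.0, 0.0]   # start knowing nothing
kappa = 1e-3            # learning rate

# Online supervised training against the target function f(x) = x
for step in range(2000):
    x = ((step % 21) - 10) / 10.0          # cycle through [-1, 1]
    a = activities(x)
    xhat = sum(d * ai for d, ai in zip(decoders, a))
    error = xhat - x                        # supervision signal
    for i in range(len(decoders)):
        decoders[i] -= kappa * error * a[i]  # PES update (decoder form)

# After training, the decoded estimate tracks the target
xhat = sum(d * ai for d, ai in zip(decoders, activities(0.3)))
```

In Nengo itself you would instead set `learning_rule_type=nengo.PES()` on a `nengo.Connection` and connect an error signal to `conn.learning_rule`; combining rules in the hPES style can, if I recall the API correctly, be expressed by passing a list such as `[nengo.PES(), nengo.BCM()]`.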

You can find some previous discussion of reinforcement learning (including links to examples and papers) in these threads:

Hope that helps!
