Learning without an objective function

Hi @shruti_sneha, welcome to the forum!

It’s quite difficult to pin things down when it comes to learning. For one, there are a lot of ways that people use the word “learning” that differ depending on the field… Are we talking about synaptic weight changes from plasticity experiments? Are we talking about a machine learning gradient descent technique? Are we talking about adaptation from one trial to the next? Learning means a lot of things to a lot of people, so the more precise you can be in your question, the easier it is to answer!

Since you mention objective functions, I’m guessing that the type of learning you’re referring to is grounded in machine learning. The way machine learning talks about learning rules, in terms of minimizing or maximizing some objective function, doesn’t have a clear correspondence to the way we talk about learning in Nengo. You could think of each Nengo learning rule as minimizing some objective function, but in most cases the inputs and outputs of that function wouldn’t match the inputs and outputs of an objective in a typical machine learning setup.
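To make the contrast concrete, here’s what I mean by the machine learning picture: an explicit objective (mean squared error here) and a parameter update that follows its gradient. This is just an illustrative NumPy sketch of my own; none of the names or values come from Nengo.

```python
import numpy as np

# Illustrative sketch: learning as gradient descent on an explicit objective.
rng = np.random.default_rng(seed=0)
X = rng.normal(size=(100, 5))   # inputs
y = X @ rng.normal(size=5)      # targets from a hidden linear map
w = np.zeros(5)                 # parameters to learn
lr = 0.01                       # learning rate (arbitrary choice)

for _ in range(200):
    err = X @ w - y             # residuals
    grad = X.T @ err / len(y)   # gradient of 0.5 * mean squared error
    w -= lr * grad              # gradient descent step
```

In Nengo, by contrast, the learning rules are local update rules applied during simulation; there’s no explicit loss being differentiated anywhere.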

The learning rules with the clearest connection are the PES learning rule and the delta rule. Because PES is defined in the decoded vector space, it has a pretty direct mapping to the delta rule, which I talk about in my master’s thesis. However, learning rules like BCM and Oja are defined in terms of neural activities and synaptic weights, which have no obvious mapping to the decoded vector space.
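For example, here’s a minimal sketch of the usual learned communication channel using PES in Nengo (the ensemble sizes, learning rate, and input signal are arbitrary choices on my part):

```python
import numpy as np
import nengo

model = nengo.Network()
with model:
    # Input signal and pre/post ensembles
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))
    pre = nengo.Ensemble(100, dimensions=1)
    post = nengo.Ensemble(100, dimensions=1)
    nengo.Connection(stim, pre)

    # Connection that starts out computing nothing useful;
    # PES adapts its decoders online during the simulation
    conn = nengo.Connection(
        pre, post,
        function=lambda x: 0,
        learning_rule_type=nengo.PES(learning_rate=1e-4),
    )

    # Error signal: decoded output minus the target (here, the input itself)
    error = nengo.Ensemble(100, dimensions=1)
    nengo.Connection(post, error)
    nengo.Connection(stim, error, transform=-1)

    # Feed the error into the learning rule
    nengo.Connection(error, conn.learning_rule)

with nengo.Simulator(model) as sim:
    sim.run(10.0)
```

Note that the error here is just another signal in the network, computed and fed to the rule locally; nothing is being backpropagated through a global objective.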

Take a look at the thesis I linked above, and the learning examples in Nengo. We’d be happy to answer specific questions about those examples, and it’d also help if you could provide more background on what kind of learning you’re talking about. :slight_smile: