Learning without an objective function

I’ve been rather stuck on the learning process in Nengo. Is learning possible in Nengo without using an objective function? If so, how?

Why do you want to learn without an objective function? What do you hope to accomplish with this learning? I’m really not clear on what your question is asking.

Hi @shruti_sneha, welcome to the forum!

It’s quite difficult to pin things down when it comes to learning. For one, there are a lot of ways that people use the word “learning” that differ depending on the field… Are we talking about synaptic weight changes from plasticity experiments? Are we talking about a machine learning gradient descent technique? Are we talking about adaptation from one trial to the next? Learning means a lot of things to a lot of people, so the more precise you can be in your question, the easier it is to answer!

Since you mention objective functions, I’m guessing that the type of learning that you’re referring to is grounded in machine learning. The way that machine learning talks about learning rules, in terms of minimizing or maximizing some objective function, does not have a clear correspondence to the way that we talk about learning in Nengo. You could think of each Nengo learning rule as minimizing some objective function, but in most cases the inputs and outputs to that function would not be the same inputs and outputs that you would have in a learning rule in machine learning.

The learning rules with the clearest connection are the PES learning rule and the delta rule. Because PES is defined in the decoded vector space, it has a pretty direct mapping to the delta rule, which I talk about in my master’s thesis. However, learning rules like BCM and Oja are defined in terms of neural activities and synaptic weights, which do not have an obvious mapping to the decoded vector space.
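
To make that concrete, here’s a minimal sketch (not the only way to set this up) of how PES is typically attached to a connection in Nengo to learn the square of an input online. The sine-wave input, ensemble sizes, and learning rate are just illustrative choices:

import numpy as np
import nengo

with nengo.Network() as model:
    stim = nengo.Node(lambda t: np.sin(t))  # example input signal
    pre = nengo.Ensemble(100, dimensions=1)
    post = nengo.Ensemble(100, dimensions=1)
    error = nengo.Ensemble(100, dimensions=1)

    nengo.Connection(stim, pre)

    # start with a connection that computes nothing useful (constant zero)
    # and let PES adapt its decoders online
    conn = nengo.Connection(
        pre, post,
        function=lambda x: [0],
        learning_rule_type=nengo.PES(learning_rate=1e-4),
    )

    # error = actual output minus target (the square of the represented value)
    nengo.Connection(post, error)
    nengo.Connection(pre, error, function=lambda x: x**2, transform=-1)

    # the error signal drives the PES rule on conn
    nengo.Connection(error, conn.learning_rule)

with nengo.Simulator(model) as sim:
    sim.run(10.0)

After the run, the decoders on conn have been adjusted so that post approximates the square of the value represented by pre; adding probes on post and error is the usual way to watch that happen.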

Take a look at the thesis I linked above, and the learning examples in Nengo. We’d be happy to answer specific questions about those examples, and it’d also be helpful if you could provide more background on what kind of learning you’re talking about. :slight_smile:

I apologise for not providing sufficient details in my query. Yes, I am talking about machine learning. To be clear: machine learning is an automated process, and we usually don’t tell the network what function to perform. For each set of inputs, the network produces its own set of outputs, and based on the error between the actual and desired outputs, the network automatically updates its synaptic weights. A CNN works this way. Now, in Nengo, if we simply have to calculate the square of a given number, is it possible to just provide the input and its desired output, without instructing the network to follow a particular mathematical function for finding the square of that number? That is the objective function I was talking about in my query.

If it’s not possible, then how can we say that the network is learning? If it is possible, then how does the network update its synaptic weights?

Thanks for the background info. :slight_smile:

In Trevor’s reply, he links to some of the documentation we have on learning. Can you help us help you by telling us what you understood and didn’t understand from those examples?

Hi @shruti_sneha! This idea of defining a function implicitly by giving Nengo a set of points, rather than explicitly with an actual mathematical function, is something we’ve been playing around with for a while. We found it useful, so we made it easier in our most recent release (v2.2.0).

Here’s an example that shows how to do that in a simple one-dimensional case like you described (requires Nengo v2.2.0):

import matplotlib.pyplot as plt
import numpy as np
import nengo
import nengo.utils.connection  # provides eval_point_decoding, used below

m = 200  # number of training points
n = 100  # number of neurons

# --- create x and y points for target function (square)
rng = np.random.RandomState(0)
x = rng.uniform(-1, 1, size=(m, 1))         # random points along x-axis
y = x**2 + rng.normal(0, 0.1, size=(m, 1))  # square of x points plus noise

with nengo.Network() as model:
    a = nengo.Ensemble(n, 1)
    b = nengo.Ensemble(n, 1)
    c = nengo.Connection(a, b, eval_points=x, function=y)

# building the simulator solves for the decoders that implement the function
with nengo.Simulator(model) as sim:
    pass

# evaluate the decoded function on a grid of test points;
# eval_point_decoding returns (eval_points, targets, decoded outputs)
x2 = np.linspace(-1, 1, 100).reshape(-1, 1)
x2, _, y2 = nengo.utils.connection.eval_point_decoding(c, sim, eval_points=x2)

plt.plot(x, y, 'k.')
plt.plot(x2, y2, 'b-')
plt.show()

You can see that my function is defined implicitly by the points x and y. I create these points, but in practice they’d probably come from some dataset, or real-world measurements. Then, when I create the connection c to compute this function, I pass the x and y points. The last part of the code plots how accurately this connection is able to compute the function, showing the noisy training points in black and the computed function in blue.
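
If you want a rough number for how good that fit is, you can compare the decoded output against the true squares of the test points (this reuses the x2 and y2 arrays from the code above):

# root-mean-square error between the decoded function and the true square
rmse = np.sqrt(np.mean((y2 - x2**2) ** 2))
print("RMSE of decoded square:", rmse)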

If you want a more interesting example using images, check out this example using the MNIST digits.


Hi Eric!
Thank you so much for such a clear code example; it really helped me understand. I got your point, and when I ran the code it gave me the desired output. Thanks again :smiley: