Ensemble fails to approximate a simple discrete function?

I'm quite new to Nengo and just learning things.
From the book How to Build a Brain, I found that neurons are bad at approximating discrete functions.

So my question is: can neurons approximate a simple categorization? That was my basic question, since most categories are a kind of discrete function of some continuous values. In my case I have categories a, b, and c, which depend on the values x, y, z, w, each in the range [1, 100].

import nengo
import numpy as np

a = -1
b = 0
c = 1

n_neurons = 1000


def stimulus_fn(arr):
    # categorize the combined quantity (x*y*z)/w into a, b, or c
    x, y, z, w = arr
    rNo = (x * y * z) / w
    if rNo <= 2300:
        return a
    elif 2300 < rNo < 4000:
        return b
    else:
        return c


with nengo.Network() as model:
    vel = nengo.Node([1, 1, 1, 1])
    network = nengo.Ensemble(n_neurons, dimensions=1, radius=2)
    nengo.Connection(vel, network, function=stimulus_fn)

But what I found is that it fails to approximate this simple discrete function. Am I doing anything wrong here?

Hello ganesh,

When you say that it “fails to approximate this function”, what do you mean? What do you expect to see, compared to what you are seeing? When I run your code, I see the expected result (ensemble switching between -1, 0, and 1 as appropriate).

As a side note, in your model you aren't actually approximating the function using neurons. Because the function is set on a Connection from a Node, it is computed entirely in Python. If you wanted to approximate your function using neurons, you would need to set that function on an output Connection from an Ensemble, rather than a Node. But that is a separate issue from the one you are asking about.
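To make that distinction concrete, here is a minimal sketch (my illustration, not from your post) contrasting the two setups. The first function is evaluated exactly in Python; only the second is approximated by neurons:

import nengo

with nengo.Network() as model:
    node = nengo.Node([0.5])
    ens = nengo.Ensemble(100, dimensions=1)
    out = nengo.Node(size_in=1)

    # Function on a Connection from a Node: evaluated directly in Python,
    # so the result is exact (no neurons involved in the computation)
    nengo.Connection(node, ens, function=lambda x: x ** 2)

    # Function on a Connection from an Ensemble: solved for as decoding
    # weights over the neural activities, so the result is an approximation
    nengo.Connection(ens, out, function=lambda x: x ** 2)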

Is it like the following? Should I set the function on the connection between the ensemble and the output node?

import nengo
import numpy as np

a = -1
b = 0
c = 1


def output_func(arr):
    # categorize the combined quantity (x*y*z)/w into a, b, or c
    x, y, z, w = arr
    rNo = (x * y * z) / w
    if rNo <= 2300:
        return a
    elif 2300 < rNo < 4000:
        return b
    else:
        return c


def input_func(t):
    # four random inputs drawn from [1, 100]
    return [np.random.uniform(1, 100) for _ in range(4)]


with nengo.Network() as model:
    vel = nengo.Node(input_func)
    output = nengo.Node(size_in=1)
    network = nengo.Ensemble(1000, dimensions=4, radius=100)

    nengo.Connection(vel, network)
    nengo.Connection(network, output, function=output_func)

By "approximate this function" I mean it should be able to approximate the classification function properly. In this case, if x, y, z = 100 and w = 1, then as per my output_func the output should be near c (1), but what I see is around 0.3; it is above zero but not accurate. I understand that neurons are approximating functions, so
I also feel I need to tune the tuning curves to approximate this function properly.
I've seen that Nengo has a learning module, so will learning change the tuning curves?
Is it possible to set the radius to be purely positive or purely negative?

By "approximate this function" I mean it should be able to approximate the classification function properly. In this case, if x, y, z = 100 and w = 1, then as per my output_func the output should be near c (1), but what I see is around 0.3; it is above zero but not accurate. I understand that neurons are approximating functions

In the first code snippet you posted, this is what you should have observed when you ran the model (i.e., it should correctly approximate the function at all times). That is because the function isn't being approximated in neurons. So let me know if that is not the case, and we can look into why that is.

In your second code snippet you are using the neural activities/connection weights to do the function approximation, so then we expect to see approximation errors, as you are seeing. In general, as you mentioned, the results for discrete function approximation will not be great. That is because most of the default parameter initializations and optimizations are set up to assume that you want to do continuous function approximation.

There are a lot of different things you can try to do to improve the approximation accuracy. One would be, as you mentioned, to adjust the tuning curves of the neurons. Basically you want your tuning curves to align with the shape of the function you want to represent, so if you want to represent a discrete function then you want your tuning curves (in particular, the intercepts) to line up with the discrete function boundaries.
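For instance, here is a rough 1D sketch of that idea (my illustration; the specific values are assumptions): clustering the intercepts around the decision boundary makes the tuning curves change rapidly exactly where the function jumps, which sharpens the step.

import nengo

with nengo.Network() as model:
    # sweep the input across [-1, 1] so the step at 0 is visible
    stim = nengo.Node(lambda t: 2 * (t % 1.0) - 1)
    ens = nengo.Ensemble(
        200, dimensions=1,
        # cluster intercepts around the boundary at 0 (illustrative choice)
        intercepts=nengo.dists.Uniform(-0.05, 0.05),
    )
    out = nengo.Node(size_in=1)
    nengo.Connection(stim, ens)
    nengo.Connection(ens, out, function=lambda x: 1.0 if x[0] > 0 else -1.0)
    probe = nengo.Probe(out, synapse=0.01)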

Another option is to simplify the input space. I'm not sure what your use case is, but often when doing discrete function approximation we also have a more limited input space. By default, Nengo assumes that you want to optimize across the whole input space, but if that is not the case we can use the eval_points parameter to specify the input values we care about (e.g., your example of x, y, z = 100 and w = 1). This allows the optimization process to focus more on those input values, and you'll get a cleaner output. If you run the example below, I suspect you'll see something closer to what you were expecting:

import nengo
import numpy as np

a = -1
b = 0
c = 1

n_neurons = 1000


def output_func(arr):
    x, y, z, w = arr
    rNo = (x * y * z) / w
    if rNo <= 0.33:
        return a
    elif 0.33 < rNo < 0.66:
        return b
    else:
        return c


def input_func(t):
    return [1] * 4


with nengo.Network() as model:
    vel = nengo.Node(input_func)
    output = nengo.Node(size_in=1)
    network = nengo.Ensemble(
        n_neurons, dimensions=4, radius=1,
        # optimize the decoders only over the input region we care about
        eval_points=np.random.uniform(0, 1, size=(10000, 4)),
    )

    nengo.Connection(vel, network)
    nengo.Connection(network, output, function=output_func)

(here I’m setting the evaluation points to all be between 0 and 1, rather than -1 to 1).
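One quick way to check this (my addition, assuming the model above) is to probe the output node and run the simulator briefly; since (1*1*1)/1 = 1 > 0.66, the decoded output should settle near c (1):

with model:
    p_out = nengo.Probe(output, synapse=0.01)

with nengo.Simulator(model) as sim:
    sim.run(0.5)

# the last few samples should be close to 1
print(sim.data[p_out][-10:])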

I've seen that Nengo has a learning module, so will learning change the tuning curves?

Nengo has learning rules that can modify the tuning curves, but none of the current rules modify the intercept/bias, which, as mentioned above, is probably the most important parameter for approximating a discrete function. However, you could use the sim.train function in NengoDL to learn intercept values.
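For reference, Nengo's built-in error-driven rule (PES) looks roughly like the sketch below (my illustration, not from this thread). Note that PES adapts the decoders, not the intercepts, which is exactly the limitation mentioned above:

import numpy as np
import nengo

with nengo.Network() as model:
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))
    pre = nengo.Ensemble(100, dimensions=1)
    post = nengo.Node(size_in=1)
    nengo.Connection(stim, pre)

    # start from a zero function and let PES adapt the decoders online
    conn = nengo.Connection(
        pre, post, function=lambda x: 0.0,
        learning_rule_type=nengo.PES(learning_rate=1e-4),
    )

    # error signal = actual - target, where the target is a step function
    target = nengo.Node(lambda t: 1.0 if np.sin(2 * np.pi * t) > 0 else -1.0)
    error = nengo.Node(size_in=1)
    nengo.Connection(post, error)
    nengo.Connection(target, error, transform=-1)
    nengo.Connection(error, conn.learning_rule)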

Is it possible to set the radius to be purely positive or purely negative?

The radius can only be positive, which represents the range (-radius, radius). But you can think of that method above (manually setting the eval_points) as bypassing the radius, and directly specifying the representational range we care about. That isn’t the whole story for how the radius works, but I think it’s what you were looking for.
