Approximating this function means it should be able to approximate the classification function properly. For example, in the case of x, y, z = 100 and w = 1, according to my output_func the output should be near c (1), but I’m seeing an output of around 0.3. It is above zero, but not accurate. I understand that neurons are approximating functions.
In the first code snippet you posted, that is what you should have observed when you ran the model (i.e., it should correctly compute the function at all times), because in that snippet the function isn’t being approximated in neurons. Let me know if that is not the case, and we can look into why.
In your second code snippet you are using the neural activities/connection weights to do the function approximation, so we expect to see approximation errors, as you are seeing. In general, as you mentioned, the results for discrete function approximation will not be great, because most of the default parameter initializations and optimizations assume that you want to do continuous function approximation.
There are several things you can try to improve the approximation accuracy. One would be, as you mentioned, to adjust the tuning curves of the neurons. Basically, you want your tuning curves to align with the shape of the function you want to represent, so if you want to represent a discrete function, you want your tuning curves (in particular, the intercepts) to line up with the discrete function boundaries.
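To illustrate the idea, here is a minimal 1-D sketch (my own illustration, not from your snippets), assuming a single step in the function at x = 0.5; the exact numbers are placeholders:

import nengo

with nengo.Network():
    # Cluster the intercepts around the step at x = 0.5, so that many tuning
    # curves switch on/off right where the function changes value
    ens = nengo.Ensemble(
        100,
        dimensions=1,
        intercepts=nengo.dists.Uniform(0.4, 0.6),
        encoders=nengo.dists.Choice([[1]]),  # all neurons prefer +x
    )

The same idea carries over to your 4-D case, although there the intercepts apply along each neuron’s preferred direction vector, so lining them up with the boundaries of rNo takes a bit more work.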
Another option is to simplify the input space. I’m not sure what your use case is, but often when doing discrete function approximation we also have a more limited input space. By default, Nengo assumes that you want to optimize across the whole input space, but if that is not the case, we can use the eval_points parameter to specify the input values we care about (e.g., your example of x, y, z = 100 and w = 1). This allows the optimization process to focus on the input values you care about, and you’ll get a cleaner output. If you run the following example, I suspect you’ll see something closer to what you were expecting:
import nengo
import numpy as np

# Discrete output values
a = -1
b = 0
c = 1

n_neurons = 1000

def output_func(arr):
    x, y, z, w = arr
    rNo = (x * y * z) / w
    if rNo <= 0.33:
        return a
    elif 0.33 < rNo < 0.66:
        return b
    else:
        return c

def input_func(t):
    return [1] * 4

with nengo.Network() as model:
    vel = nengo.Node(input_func)
    output = nengo.Node(size_in=1)
    network = nengo.Ensemble(
        n_neurons,
        dimensions=4,
        radius=1,
        # Optimize the decoders only over the input range we care about
        eval_points=np.random.uniform(0, 1, size=(10000, 4)),
    )
    nengo.Connection(vel, network)
    nengo.Connection(network, output, function=output_func)
(here I’m setting the evaluation points to all be between 0 and 1, rather than -1 to 1).
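To actually look at the decoded output, you could add a probe and run the simulation, something like this (my addition, not part of the snippet above):

with model:
    probe = nengo.Probe(output, synapse=0.01)

with nengo.Simulator(model) as sim:
    sim.run(1.0)

# With x = y = z = w = 1, rNo = 1 > 0.66, so the decoded output should
# settle near c = 1
print(sim.data[probe][-100:].mean())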
I have seen that Nengo has a learning module, so will learning change the tuning curves?
Nengo has learning rules that can modify the tuning curves, but none of the current rules modify the intercept/bias, which, as mentioned above, is probably the most important parameter for approximating a discrete function. However, you could use the sim.train function in NengoDL to learn intercept values.
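As a rough sketch of what that might look like (this follows the NengoDL 2.x-style sim.train API, so check the exact signature for your version; the data shapes and hyperparameters are illustrative, and output_func is reused from the snippet above):

import nengo
import nengo_dl
import numpy as np
import tensorflow as tf

with nengo.Network() as net:
    inp = nengo.Node(np.zeros(4))
    ens = nengo.Ensemble(1000, dimensions=4)
    out = nengo.Node(size_in=1)
    nengo.Connection(inp, ens)
    nengo.Connection(ens, out, function=lambda x: 0.0)
    probe = nengo.Probe(out)

# Illustrative training data: random inputs in [0, 1]^4, with targets
# computed by output_func from the earlier snippet
train_x = np.random.uniform(0, 1, size=(1024, 1, 4))
train_y = np.array([[[output_func(x[0])]] for x in train_x])

with nengo_dl.Simulator(net, minibatch_size=32) as sim:
    # Neuron biases (and hence the effective intercepts) are among the
    # parameters NengoDL optimizes by default
    sim.train({inp: train_x, probe: train_y},
              tf.train.AdamOptimizer(0.01), n_epochs=10)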
Is it possible to set the radius to be purely positive or purely negative?
The radius can only be positive, and it represents the range (-radius, radius). But you can think of the method above (manually setting the eval_points) as bypassing the radius and directly specifying the representational range we care about. That isn’t the whole story of how the radius works, but I think it’s what you were looking for.
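In other words (a small sketch of the same idea as in the example above), you can get an effectively "purely positive" range by keeping a positive radius but restricting the evaluation points:

import nengo
import numpy as np

with nengo.Network():
    # radius=1 still nominally implies the range (-1, 1), but the decoders
    # are only optimized over inputs in [0, 1]
    ens = nengo.Ensemble(
        100,
        dimensions=1,
        radius=1,
        eval_points=np.random.uniform(0, 1, size=(1000, 1)),
    )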