Can "too much" neurons harm the model performance?

I’ve created a model with one Ensemble whose decoders are optimized to compute some complex mathematical formula.

Intuitively, I found a sweet spot in the number of neurons where the model works really well, and when I add more neurons, the model’s performance decreases (I don’t use learning).

Is that a known thing with SNNs, or do I have some issue with my model?

Hmm, adding more neurons should never decrease accuracy; typically it’s just compute constraints that limit the number of neurons. The function approximation improves roughly with sqrt(n), where n is the number of neurons. It sounds like a model issue. If you have a minimal example script, I could take a quick look.
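
If you want to sanity-check that scaling empirically, here’s a minimal sketch (a toy 1-D ensemble decoding x², with illustrative neuron counts, seed, probe filtering, and run time; not your model):

import numpy as np
import nengo

def decode_rmse(n_neurons, seed=0):
    # Build a small network: a 1-D ensemble decoding x**2 from a slow sine input.
    with nengo.Network(seed=seed) as net:
        stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))
        ens = nengo.Ensemble(n_neurons, dimensions=1)
        nengo.Connection(stim, ens)
        out = nengo.Node(size_in=1)  # passthrough readout
        nengo.Connection(ens, out, function=np.square, synapse=None)
        p_in = nengo.Probe(stim, synapse=0.01)
        p_out = nengo.Probe(out, synapse=0.01)
    with nengo.Simulator(net, progress_bar=False) as sim:
        sim.run(1.0)
    # Both signals go through the same probe filter, so this comparison is only approximate.
    ideal = np.square(sim.data[p_in])
    return np.sqrt(np.mean((sim.data[p_out] - ideal) ** 2))

for n in (50, 100, 250, 1000, 4000):
    print(n, decode_rmse(n))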

Hmm, so that is weird. I built a simple motion model with one three-dimensional ensemble:

import math
import nengo

model = nengo.Network(seed=_seed)
with model:
    # get_sensors_reading provides the 3 sensor values each timestep
    sensors_node = nengo.Node(get_sensors_reading, size_out=3)
    sensors = nengo.Ensemble(1000, dimensions=3, radius=math.sqrt(3))
    nengo.Connection(sensors_node, sensors)
    # size_in=1 is assumed here; set it to match calc_motion's output dimensionality
    output_node = nengo.Node(actuators_command, size_in=1)
    # This connection's decoders are optimized to approximate calc_motion
    nengo.Connection(sensors, output_node, function=calc_motion)

The sensors ensemble represents the 3 values that I get from the sensors. Then, the connection from the sensors ensemble to the output_node node computes my motion formula (through its decoders).

Since it is a three-dimensional ensemble and a complex formula, I wanted to evaluate how many neurons my model needs.

I tested the motion in a physical simulator and calculated the error based on the optimal path. But I see that when I add “too many” neurons, the error increases…

[plot: path error vs. number of neurons]

It seems that the error (lower is better) decreases up to 1,000 neurons and then increases again…

I would expect 10,000 neurons to be much more accurate than 1,000 neurons (not 50% worse), and even better than 250 neurons, but the results show otherwise…

The script looks fine. It’s difficult to say more without being able to run it myself. My first guess would be to double-check your evaluation code. Can you plot the output vs. the ideal to make sure that 250 neurons actually gives the best-looking output? If that’s not the case, I’d need the full code to run it myself to say more.
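
For reference, here’s roughly how I’d compare the decoded output against the ideal with probes. This is a sketch that assumes the model above, uses out_dims as a placeholder for calc_motion’s output dimensionality, and assumes the sensor/actuator callables can run outside the physical simulator (or on logged data):

import numpy as np
import matplotlib.pyplot as plt
import nengo

with model:
    # Extra readout of the same decoded function, just for plotting.
    readout = nengo.Node(size_in=out_dims)  # out_dims: placeholder for calc_motion's output size
    nengo.Connection(sensors, readout, function=calc_motion)
    p_in = nengo.Probe(sensors_node, synapse=0.01)
    p_dec = nengo.Probe(readout, synapse=0.01)

with nengo.Simulator(model) as sim:
    sim.run(2.0)

# Ideal output: calc_motion applied to the (filtered) sensor readings.
ideal = np.array([calc_motion(x) for x in sim.data[p_in]])

plt.plot(sim.trange(), sim.data[p_dec], label="decoded")
plt.plot(sim.trange(), ideal, "--", label="ideal")
plt.legend()
plt.show()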

OK, so after examining the results carefully, it seems that the motion with 250 and 1,000 neurons was closer to the optimal path in terms of position, but was much noisier, moving aggressively in sharp movements. The motion with 10,000 neurons was further from the optimal path but much smoother (almost like the optimal path, but with some drift).

That’s a really interesting aspect of SNNs…

Anyway, I believe that it is related to how I measure the “error”. Thank you for the help!
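
For example, something like this would let me score how close the motion is to the optimal path and how jerky it is separately (a rough sketch; path and optimal are hypothetical arrays of positions sampled at the same times, and dt is the sampling step):

import numpy as np

def path_metrics(path, optimal, dt):
    # path, optimal: arrays of shape (timesteps, dims), sampled at the same times.
    # Position error: RMS distance to the optimal path.
    position_rmse = np.sqrt(np.mean(np.sum((path - optimal) ** 2, axis=1)))
    # Roughness: mean magnitude of the discrete second derivative (how "jerky" the motion is).
    accel = np.diff(path, n=2, axis=0) / dt ** 2
    roughness = float(np.mean(np.linalg.norm(accel, axis=1)))
    return position_rmse, roughness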


By the way, can you please explain the sqrt(n) claim?

yw! this has happened to me several times

For sure. If you imagine a plot with “function approximation accuracy” on the y-axis and “number of neurons” on the x-axis, then as you increase the number of neurons the accuracy should grow roughly like sqrt(n). Does that help?

Hmm, interesting. But two questions:

  1. What is the math behind this claim?

  2. What is the scale of that “y-axis”? According to that curve, the “accuracy” would be the same for a given number of neurons whether I compute a simple function, like the identity, or my complex motion function. But the identity can be fairly accurate with, say, 100 neurons, while my complex motion function needs a much higher number of neurons to behave reasonably. Even if I use the identity function in two models, one with radius 1 and the other with radius 2, the second model would need more neurons to be as accurate as the first (see the sketch after this list).
    So, how can I evaluate the “accuracy” from sqrt(n), or know how many neurons my model needs for x% accuracy? It depends on the scale of that “y-axis”…
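
Here is the kind of comparison I mean for the radius point (a rough sketch: identity function, same neuron count, input scaled to fill each radius; the numbers are illustrative):

import numpy as np
import nengo

def identity_rmse(n_neurons, radius, seed=0):
    # Drive a 1-D ensemble across its full radius and measure the decoded-identity error.
    with nengo.Network(seed=seed) as net:
        stim = nengo.Node(lambda t: radius * np.sin(2 * np.pi * t))
        ens = nengo.Ensemble(n_neurons, dimensions=1, radius=radius)
        nengo.Connection(stim, ens)
        p_in = nengo.Probe(stim, synapse=0.01)
        p_out = nengo.Probe(ens, synapse=0.01)
    with nengo.Simulator(net, progress_bar=False) as sim:
        sim.run(1.0)
    return np.sqrt(np.mean((sim.data[p_out] - sim.data[p_in]) ** 2))

for radius in (1.0, 2.0):
    print(radius, identity_rmse(100, radius))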