Deterministically running Nengo models by setting seeds

I’m trying to run a Nengo SPA model so that I can get the same results each time. I understand the following are required:

  • Passing an rng when initialising the SPA vocab, so that the vocab is assigned the same vectors each run
  • The top-level network seed must be set so that the encoders and decoders are given the same values each run
  • The simulator seed must be set if you’re using any stochastic processes
  • Any distributions or other Nengo objects initialised outside of the Nengo model must be given their own seed

Is there anything else I’m missing? Was this comprehensively documented elsewhere?


I’m not 100% sure about SPA stuff, but those are all the things that I can think of. :+1: :rainbow:

That’s all I can think of, but I’ll sometimes be paranoid and set the global numpy seed (and the python random seed if I’m being really paranoid), just to make sure:

import random
import numpy as np

np.random.seed(seed)
random.seed(seed)

That said, if either of those seeds actually affects something that isn’t caught by the seeds that you’re already setting, then I’d consider that to be a bug.

I have a Nengo model controlling a robot. I do not use any stochastic processes inside Nengo and all the non-Nengo code is deterministic, so to ensure reproducibility, I only set the seed for the top-level network. Indeed, when I run my model using Spyder, this results in the same behavior of the robot across different simulation runs.

If I change the seed, the behavior of the robot changes somewhat, because the Nengo controller seems to be very sensitive to the actual choice of encoders and decoders.

If, instead of using Spyder, I use the Nengo GUI, the robot behavior is again the same for the same seed across different runs. However, this behavior is different from the behavior exhibited when using Spyder, even though in both cases the seed is exactly the same. Is this expected? How can this be explained?

Indeed, models can be sensitive to the actual choice of encoders, intercepts, etc. It might be the case that changing the distributions that the parameters are drawn from will make your models more robust.

As to the difference between behavior in Spyder and the GUI, I’m not 100% sure, but my guess would be that the Python environment that you’re using in Spyder is different from the one in the GUI. You can verify this by running a script in both environments that contains the lines

import sys
print(sys.executable)

It will print out the location of the Python program that ends up being run. If they’re not the same, then it’s likely that they’re using different installations of NumPy. NumPy is how we generate random numbers; I’m not sure about the details of how it draws random numbers, but it’s possible that you would get different results with the same seed using different versions of NumPy.
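One way to check this, run in both environments (the specific draws printed here are just for comparing the two setups side by side):

```python
import sys
import numpy as np

print(sys.executable)     # which Python is running
print(np.__version__)     # which NumPy is installed
np.random.seed(0)
print(np.random.rand(3))  # compare the first draws across environments
```

If the executables, versions, and first draws all match, a NumPy difference is unlikely to be the cause.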

My guess is that you are running into the issue documented here. Essentially, there is a bug which can cause the seeds of individual Nengo objects to change in the Nengo GUI even though a top level seed has been set.


Thank you for the suggestion. I will definitely consider this.

[quote=“tbekolay, post:5, topic:143”]
As to the difference between behavior in Spyder and the GUI, I’m not 100% sure, but my guess would be that the Python environment that you’re using in Spyder is different from the one in the GUI.
[/quote]

I checked using the suggested script and found that the Python executable is the same in both cases. It seems that @jgosmann’s guess is correct here.

Thank you. It looks like you are right. I verified this by removing all sliders and plots from the Nengo GUI and then running the code again with the same seed as before. The behavior was then the same as when the code was run using Spyder.