I’m using Nengo to control a robot simulation. The simulation itself is deterministic, but when I control it through Nengo, I get slightly different results every time. I’ve set the following seeds:
- Nengo Network top-level seed, via
model = nengo.Network(seed=123)
- NumPy random seed
- Python random seed

I’m using nengo.Simulator (I didn’t pass a seed there because, as I understand it, it uses the network seed + 1) and still get slightly different results every time.
I’m using the same Python environment each time, running Nengo as a plain Python script (without the Nengo GUI). No threads or subprocesses are involved.
To be clear: when I control the simulation with a deterministic algorithm instead, I get exactly the same results every time, but this is not the case with Nengo.
What could be the reason for that?
I assume you are using a GPU? If so, CUDA/cuDNN has its own set of seeds. It also has a .deterministic flag that can affect some computations.
No, actually CPU (Nengo Core).
I’m not seeing anything obvious that you’re missing in terms of seeding.
The next step would be to figure out where the non-determinism is coming from. For example, if you put probes on the input to and output from your robot simulator (I assume it’s connected from inside a nengo.Node), you should be able to tell whether the non-determinism originates in the Nengo network or in the robot simulator. (I know you said the simulator should be deterministic, but at this point both should be deterministic, so it’s good to confirm.)
If it’s from the Nengo network, the next thing to do would be to track down whether a particular object in the network is causing the non-determinism (e.g. a particular Ensemble or Connection). Again, more probes will help here. Everything should get seeded by setting the network seed, but it’s possible there’s a bug or oversight there.