Looking at your code, my initial hypothesis appears to be correct. Including just that one extra connection changed the build order of your Nengo model enough that, even with a fixed random seed, the M1 ensemble was initialized with different values. You can see this happening by printing out the neuron intercepts and biases like so:
model = generate()
with nengo_dl.Simulator(model) as sim:
    print(sim.data[model.M1].intercepts)
    print(sim.data[model.M1].bias)
Without the extra connection to the M1 ensemble, on my computer, the intercepts and biases are like so:
[-0.44059655 -0.17302467 -0.54588354 0.8080366 -0.94593287 0.44017956 ... -0.9925257 0.73869216 -0.43185237 -0.30000842]
[ 4.9618087 2.5034149 12.357793 -80.311424 6.4380517 ... -14.356472 13.857874 -26.95613 8.037182 5.277952 ]
And with the extra connection, they look like this:
[-0.5612832 -0.0298115 -0.45642552 -0.38809964 -0.7269939 -0.70368075 ... 0.15300462 0.37762672 -0.12352662 0.48028773]
[ 6.220283 1.1853896 7.1105046 8.285944 6.4681625 ... -28.992367 -1.669669 -19.141626 2.9659586 -8.224886 ]
Since the neurons in your M1 ensemble have different parameters depending on whether the connection is included in or excluded from your model, the spiking pattern that you record changes as well.
Fortunately, fixing the code is straightforward. If you want both versions of the code to produce the same spiking pattern, seed the ensemble instead of the network (you can still seed the network too, but the network seed would have no effect on that one explicitly seeded ensemble). As an example, if you do this:
with net, config:
    net.M1 = nengo.Ensemble(n_neurons=100, dimensions=1, seed=0)
having the connection included and excluded in your model should produce identical results.
Generally, seeding a network is used when you want reproducible results under the assumption that the network structure isn't changed between runs. In your use case, since the network structure differs between the two versions, you'll want to seed the specific objects that you don't want changed, i.e., in this case, the ensembles.