Probe shows different values for ensemble.neurons than connection

I have an ensemble of neurons, net.M1.M1. The data generated by the probe nengo.Probe(net.M1.M1.neurons) contains the values 0 and 1000, indicating whether a neuron spiked, as expected (spikes have amplitude 1/dt = 1000 for dt=0.001). However, this output is totally different:

net.test_node = nengo.Node(output=lambda t, x: print(x), size_in=1000)
nengo.Connection(net.M1.M1.neurons, net.test_node)

These results also contain 0s, but instead of 1000 there are seemingly random numbers that change over time, for example 31.605751 or 221.49023. I believe the nonzero neurons match the nonzero neurons in the probe, but I don’t understand what these numbers are. I also trained a model on probe data that I’m now trying to run inference with, using data passed into a function like the two lines above, and of course the results aren’t good. I can explicitly set the nonzero elements to 1000, but why does this happen?

Hi @Luciano,

I think the difference between the values from the probe and the connection is because of the default synaptic filter that is applied to nengo.Connection objects. Doing:

nengo.Connection(a, b)

is the same as doing:

nengo.Connection(a, b, synapse=0.005)

whereas doing this:

nengo.Probe(a)

is the same as doing this:

nengo.Probe(a, synapse=None)

This being the case, if you want your nengo.Connection code to match the nengo.Probe code, you’ll want to do this:

net.test_node = nengo.Node(output=lambda t, x: print(x), size_in=1000)
nengo.Connection(net.M1.M1.neurons, net.test_node, synapse=None)
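For what it’s worth, the seemingly random values you were seeing (e.g., 31.605751) are just the raw spike trains passed through that default lowpass filter (synapse=0.005), which smears each 0/1000 spike out over time. Alternatively, if you’d rather keep the default filtering on the connection, a sketch of the reverse fix is to apply the same synapse to the probe so that both report the same filtered values (the probe name here is just for illustration):

# Filter the probe with the connection's default lowpass synapse (tau = 0.005 s)
net.m1_filtered_probe = nengo.Probe(net.M1.M1.neurons, synapse=0.005)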

Thanks, and one related thing I’m confused about: Why does adding that connection:

net.test_node = nengo.Node(output=lambda t, x: print(x), size_in=1000)
nengo.Connection(net.M1.M1.neurons, net.test_node, synapse=None)

affect the values of net.M1.M1.neurons? I would think it would have no upstream impact, but I can see from my net.M1.M1.neurons probe that adding that connection changes the spiking of those neurons. Is there a way to keep the connection from having upstream impacts? I tried setting learning_rule_type=None, but that didn’t work.

As long as the node is not connected to any upstream ensembles (i.e., the output of your net.test_node should not be connected to anything), it should not impact the behaviour of your M1 ensemble, although I would have to look at your code to confirm this.

Assuming your network is using a random seed value to get deterministic results on each run, what you might be observing is the impact that differently ordered connections have on the overall model. Inserting an extra nengo.Connection alongside a nengo.Probe results in a slightly different operation graph being built by Nengo, which can mean that your ensembles are initialized slightly differently, resulting in slightly different behaviours.

If you could post some code that reproduces the behaviour you are observing, I can analyze it further to see what exactly is going on. :slight_smile:

Here’s a minimal example:

import nengo
import nengo_dl
import numpy as np

def generate():
    config = nengo.Config(nengo.Connection, nengo.Ensemble)
    net = nengo.Network(seed=0)
    with net, config:
        net.M1 = nengo.Ensemble(n_neurons=100, dimensions=1)
        sin = nengo.Node(output=np.sin)
        nengo.Connection(sin, net.M1)

        net.test_node = nengo.Node(output=lambda t, x: print('test'), size_in=100)
        #nengo.Connection(net.M1.neurons, net.test_node, synapse=None, learning_rule_type=None)
        net.m1_probe = nengo.Probe(net.M1.neurons)

    return net

if __name__ == '__main__':
    model = generate()

    with nengo_dl.Simulator(model) as sim:
        sim.run(0.003)
        m1_data = sim.data[model.m1_probe]
        print(m1_data)

You’ll see that the print at the end of the simulation shows different data if you uncomment the commented-out connection.

The Explanation
Looking at your code, it seems my initial hypothesis was correct. Including just the one extra connection changes the build order of your Nengo model enough that, even with a fixed random seed, the M1 ensemble is initialized with different values. You can see this happening by printing out the neuron intercepts and biases like so:

model = generate()
with nengo_dl.Simulator(model) as sim:
    print(sim.model.params[model.M1].intercepts)
    print(sim.model.params[model.M1].bias)
    ...

Without the extra connection to the M1 ensemble, on my computer, the intercepts and biases are like so:

[-0.44059655 -0.17302467 -0.54588354  0.8080366  -0.94593287  0.44017956 ... -0.9925257   0.73869216 -0.43185237 -0.30000842]
[  4.9618087    2.5034149   12.357793   -80.311424     6.4380517 ... -14.356472    13.857874   -26.95613      8.037182     5.277952  ]

And with the extra connection, they look like this:

[-0.5612832  -0.0298115  -0.45642552 -0.38809964 -0.7269939  -0.70368075 ... 0.15300462  0.37762672 -0.12352662  0.48028773]
[  6.220283     1.1853896    7.1105046    8.285944     6.4681625 ... -28.992367    -1.669669   -19.141626     2.9659586   -8.224886  ]

Since the neurons in your M1 ensemble have different parameters with the connection included / excluded from your model, it changes the spiking pattern that you record.

The Fix
Fortunately, fixing the code is straightforward. If you want both versions of the code to produce the same spiking pattern, you’ll want to seed the ensemble instead of the network (you can still seed the network too, but it won’t have any effect on the ensemble that has its own seed). As an example, if you do this:

with net, config:
    net.M1 = nengo.Ensemble(n_neurons=100, dimensions=1, seed=0)

having the connection included or excluded in your model should produce identical results. :smiley:
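If you want to double-check this, here’s a minimal sketch (using nengo core’s nengo.Simulator for brevity; the build helper and variable names are just for illustration) that builds both variants of your network and verifies that the seeded ensemble gets identical parameters either way:

import numpy as np
import nengo

def build(with_extra_connection):
    net = nengo.Network(seed=0)
    with net:
        # The ensemble-level seed is what keeps its parameters fixed
        net.M1 = nengo.Ensemble(n_neurons=100, dimensions=1, seed=0)
        sin = nengo.Node(output=np.sin)
        nengo.Connection(sin, net.M1)
        if with_extra_connection:
            # The extra connection that previously changed the build order
            test_node = nengo.Node(output=lambda t, x: None, size_in=100)
            nengo.Connection(net.M1.neurons, test_node, synapse=None)
    return net

net_a, net_b = build(False), build(True)
with nengo.Simulator(net_a) as sim_a, nengo.Simulator(net_b) as sim_b:
    # With the ensemble seeded, both builds produce the same neuron parameters
    assert np.allclose(sim_a.data[net_a.M1].intercepts, sim_b.data[net_b.M1].intercepts)
    assert np.allclose(sim_a.data[net_a.M1].bias, sim_b.data[net_b.M1].bias)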

RNG Seeding
Generally, seeding a network is used in the scenario where you want to get reproducible results, assuming the network structure isn’t changed between runs. In your use case, since the network structure is modified between the two versions, you’ll want to seed the specific objects that you don’t want changed, i.e., the ensembles (in this case).
