Access probes during the simulation (using NengoDL)

Hello everyone,
I am trying to use NengoDL as a backend and want to collect some data from the simulation for analysis purposes. I know that in Nengo the probes are stored in memory, which can be a problem when running long simulations, since all of the data can't be preserved. That's why I used an additional Node to store the probe data somewhere else and preserve only the last entry during the simulation, reading it through self.sim._sim_data[probe] when using Nengo core, and it works great!

I tried the same thing when switching to NengoDL, but I am getting an empty array, so my guesses are:

  • The probes are maybe stored somewhere else (on the TensorFlow side?)
  • The probes are only stored at the end of the simulation, with no way to access them while simulating (I hope not :roll_eyes:)

I hope someone can clarify this for me.

Here is the network creation part:

with model:
    # input layer
    picture = nengo.Node(PresentInputWithPause(image_train_filtered, presentation_time, pause_time))
    input_layer = nengo.Ensemble(...)
    input_conn = nengo.Connection(picture, input_layer.neurons)

    # weights randomly initialized
    layer1_weights = random.random((n_neurons, 784))

    # define first layer
    layer1 = nengo.Ensemble(...)
    conn1 = nengo.Connection(...)

    # create inhibitory connections
    inhib_weights = (np.full((n_neurons, n_neurons), 1) - np.eye(n_neurons)) * (-2)
    inhib = nengo.Connection(...)

    # set up the probes
    connection_layer1_probe = nengo.Probe(conn1, "weights", label="layer1_synapses")  # ('output', 'input', 'weights')

# log = instance of the class where the probe data is stored and only the last entry is preserved

with nengo_dl.Simulator(model, dt=0.005) as sim:
    log.set(sim)
    sim.run((presentation_time + pause_time) * label_train_filtered.shape[0])

Thank you :smiley:

Hi @Timodz

The sim._sim_data in nengo.Simulator is a reference (shortcut) to the sim.model.params object. NengoDL doesn't create the same reference in nengo_dl.Simulator, but the sim.model.params object does still exist. Thus, to achieve the same functionality in NengoDL, instead of accessing sim._sim_data, you'll want to use sim.model.params.

Here’s some code to demonstrate both approaches using the Nengo simulator and the NengoDL simulator:

import nengo
import nengo_dl as dl
import numpy as np

with nengo.Network() as model:
    inp = nengo.Node(lambda t: np.sin(t * np.pi * 5))
    p_in = nengo.Probe(inp)

print("===== NENGO =====")

with nengo.Simulator(model) as sim:
    for i in range(10):
        sim.step()
        print(">>", sim._sim_data[p_in])
        print("<<", sim.model.params[p_in])

print("===== NENGODL =====")

with dl.Simulator(model) as sim:
    for i in range(10):
        sim.step()
        print(">>", sim.model.params[p_in])

You’ll notice in the code above that:

  • In the Nengo simulator, sim._sim_data and sim.model.params return the same data.
  • In both the Nengo and NengoDL simulators, sim.model.params works the same way.

Thank you @xchoo for your reply. Yes, indeed, the example you provided works.

But when I tried to access the probe using sim.model.params with this way of running the simulation:

with nengo_dl.Simulator(model, dt=0.005) as sim:
    log.set(sim)
    sim.run((presentation_time + pause_time) * label_train_filtered.shape[0])

it still shows an empty list. When I used a for loop to run the simulation step by step, it did work.

Can I use the for loop, replacing the 10 with simulation_step, in order to run the simulation, since this way I can access the probes? If yes, are there any performance consequences to running it this way?

simulation_step = ((presentation_time + pause_time) / dt) * label_train_filtered.shape[0]

Yes, you should be able to replace 10 with simulation_step (i.e., simulation_time / dt) to get the equivalent of running the full simulation with a single sim.run(simulation_time) call.

In standard Nengo, I can confidently say that there shouldn't be a performance difference between the two approaches. With NengoDL, though, I am not as confident. Looking at the NengoDL code base, if you are using the NengoDL simulator like the Nengo simulator (i.e., you aren't using it to do any batch runs, and you haven't set a non-default minibatch_size parameter), then there should be no performance difference. Otherwise, sim.run may be faster than doing the for loop. I'll have to double check with the author of NengoDL, but he's on holiday and won't be back until the new year.

If you do want to do batch runs, and think that there may be a performance impact, then you might have to modify the logging of data to use the Nengo process approach as I outlined in this post (see The more Nengo way).

Ok, thank you.