Nengo DL sim.fit() and sim.evaluate() do not store probed outputs

Hi all,

I am currently working with the NengoDL library and would like to monitor model behaviour during training and evaluation. In particular, I am trying to access the probed model output via sim.data[probe] after calling sim.fit(...) or sim.evaluate(...), where sim is a nengo_dl.Simulator object. However, sim.data[probe] always returns an empty list.

As a workaround, after calling sim.evaluate(...), I currently loop over the evaluation dataset a second time, calling sim.run(...) on each batch with a single nengo_dl.Simulator and collecting the output of sim.data[probe] in a list.

I have made a more elaborate example of this very issue available here.

While the above is a fairly quick workaround, it is a bit silly to run the same data through the same model twice - once to calculate a loss (and other metrics) with sim.evaluate() and again with sim.run() to access the model output for plotting. Is there no way to have sim.data (optionally) collect the probe outputs when calling sim.evaluate() or even sim.fit()?

I don’t think it’s possible to get probe values out of sim.evaluate or sim.fit. The reason is that these functions use the corresponding Keras Model methods under the hood, and I don’t think either tf.keras.Model.fit or tf.keras.Model.evaluate lets you get the model outputs, so I don’t think it’s possible to have our functions return them either without some significant changes.

If you want to minimize computation, I think the easiest solution is to call sim.predict to get all the probed values, and then compute your metrics yourself.
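
For example (an untested sketch; here sim is the already-built nengo_dl.Simulator, inp is your input Node, out_p is your output Probe, and x_test/y_test stand in for your test data):

```python
import numpy as np

# assuming `sim` is the nengo_dl.Simulator you already built (and trained),
# `inp` is the input Node, `out_p` the output Probe, x_test/y_test your data
outputs = sim.predict({inp: x_test})
y_pred = outputs[out_p]  # array of shape (n_inputs, n_steps, dimensions)

# compute your metrics from the same outputs you will later plot,
# e.g. classification accuracy based on the final timestep
accuracy = np.mean(
    np.argmax(y_pred[:, -1], axis=-1) == np.argmax(y_test[:, -1], axis=-1)
)
```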

Alternatively, it is possible to override the train_step method of tf.keras.Model; you could have it store the model outputs somewhere, or perhaps even return them as part of the metrics dictionary. You’d then have to patch this into NengoDL somehow, and there might also be some changes required to other NengoDL internals, but I’m not sure.
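
As a very rough illustration of the generic TF 2.x Keras pattern (untested and not NengoDL-specific; the class name, the captured_shape argument, and how you would wire this into the Simulator’s underlying Keras model are all things you would have to work out yourself):

```python
import tensorflow as tf

class OutputCapturingModel(tf.keras.Model):
    """Model whose train_step also stashes the outputs of the last batch."""

    def __init__(self, *args, captured_shape, **kwargs):
        super().__init__(*args, **kwargs)
        # buffer for the outputs of the most recent training batch; a
        # tf.Variable is used so the assignment still works when train_step
        # is traced into a graph (assumes one output with a fixed shape)
        self.captured_outputs = tf.Variable(
            tf.zeros(captured_shape), trainable=False
        )

    def train_step(self, data):
        x, y = data
        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)
            loss = self.compiled_loss(
                y, y_pred, regularization_losses=self.losses
            )
        grads = tape.gradient(loss, self.trainable_variables)
        self.optimizer.apply_gradients(zip(grads, self.trainable_variables))
        self.compiled_metrics.update_state(y, y_pred)

        # keep the outputs of this batch around for later inspection
        self.captured_outputs.assign(y_pred)
        return {m.name: m.result() for m in self.metrics}
```

After fitting, you could then read captured_outputs.numpy() to see the outputs from the final training batch.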

Finally, I’d encourage you to question exactly what you need the outputs for. You can actually do quite a bit with custom metrics. In this example, we define a custom metric called rate_metric and use that to get the 99th-percentile firing rates for each layer. For that matter, it’s probably possible to have a metric that stores the last values it receives or even writes many values sequentially to a buffer, essentially creating a metric that acts like a probe (you then just have to keep a handle to that metric around, and access those variables on the metric to get your outputs).
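
A minimal sketch of that last idea could look something like this (untested; the class name and the shape argument are made up, and it assumes a fixed minibatch size so that the stored variable has a static shape):

```python
import tensorflow as tf

class ProbeMetric(tf.keras.metrics.Metric):
    """Metric that simply records the last batch of outputs it sees."""

    def __init__(self, shape, name="probe_metric", **kwargs):
        super().__init__(name=name, **kwargs)
        # buffer holding the most recent y_pred passed to update_state;
        # shape would be something like (minibatch_size, n_steps, dimensions)
        self.last_output = self.add_weight(
            name="last_output", shape=shape, initializer="zeros"
        )

    def update_state(self, y_true, y_pred, sample_weight=None):
        # store the raw outputs rather than reducing them to a scalar
        self.last_output.assign(y_pred)

    def result(self):
        # Keras expects a scalar result, so report something harmless
        return tf.reduce_mean(self.last_output)
```

You would pass an instance of this to sim.compile (keyed by the probe you care about), keep a reference to it, and then read its last_output variable after calling sim.evaluate or sim.fit.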

Hi @Eric,

Thank you for your response. I think you are right here; I have looked a bit deeper into the way Keras handles things and it is messy - too messy for me to invest time in right now. At some point I might follow your suggestion and build a custom metric that writes info to a buffer in a probe-like manner. If I get around to doing that, I will post the code here, but it is currently not a priority.