Yeah, your analysis of why you’re getting that error sounds correct.
Unfortunately there isn’t at the moment; the Nengo simulation always runs within a while loop. However, we’re in the midst of a large rewrite of NengoDL for TensorFlow 2.0, and at the end of that process it should be possible to optionally run things with or without a while loop. That probably won’t be ready for a while though (on the order of 1-2 months), but you can follow along here: TensorFlow 2.0 refactoring by drasmuss · Pull Request #95 · nengo/nengo-dl · GitHub.
I’m not sure whether this will work for your particular use case, but one thing NengoDL allows you to do is manually compute the error gradient outside the simulation, and then feed it directly into the training process.
There aren’t any nicely written examples of this, unfortunately; the closest is this test: nengo-dl/nengo_dl/tests/test_simulator.py at main · nengo/nengo-dl · GitHub. But the basic idea is that you would manually compute the gradients you’re looking for, like this:
    import nengo_dl
    import tensorflow as tf

    with nengo_dl.Simulator(net) as sim:
        # gradient of the (unfiltered) probe output w.r.t. the input node
        gradients = tf.gradients(sim.tensor_graph.probe_arrays[y_probe_nofilt],
                                 sim.tensor_graph.input_ph[x_node])
        dydx = sim.sess.run(gradients, feed_dict=sim._fill_feed(
            n_steps, data={x_node: xs_training_input}, training=True))

        # compute d_err/d_y yourself, then feed it in as the probe data
        err_grad = ...
        sim.train({x_node: xs_training_input, y_probe_nofilt: err_grad},
                  optimizer=..., objective={y_probe_nofilt: None})
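(Passing objective={y_probe_nofilt: None} there is what tells sim.train to interpret the values you pass in for that probe as the error gradient itself, rather than as training targets.)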
So I’m not sure whether it’d be possible to compute that err_grad = ... part for your use case. What you want there is d_err / d_y (i.e. the derivative of your error function w.r.t. the output of the network), which you can hopefully compute once you have that dydx value?
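Just to make that concrete, here’s a rough sketch of what computing err_grad could look like if (hypothetically) your error were a plain squared error against some target signal ys_target (a made-up name here, as is the stand-in tf.train.AdamOptimizer; for your real use case you’d substitute your own analytic d_err / d_y, possibly chaining it with that dydx value):

    with nengo_dl.Simulator(net) as sim:
        # run the network forward to get its current output y
        y_out = sim.sess.run(
            sim.tensor_graph.probe_arrays[y_probe_nofilt],
            feed_dict=sim._fill_feed(
                n_steps, data={x_node: xs_training_input}, training=True))

        # for E = 0.5 * sum((y - target) ** 2), d_err/d_y is just
        # (y - target); note you may need to reshape y_out to the
        # (minibatch, n_steps, dims) layout sim.train expects
        err_grad = y_out - ys_target

        # objective=None feeds the gradient straight into the optimizer
        sim.train({x_node: xs_training_input, y_probe_nofilt: err_grad},
                  optimizer=tf.train.AdamOptimizer(1e-3),
                  objective={y_probe_nofilt: None})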