How to predict a single image with nengo_dl / simulator?


Sorry if this question has been asked before. I’ve been working with the nengo_dl MNIST classification example. As discussed in the tutorial, I’ve been able to train a SoftLIF nengo_dl network, then switch the parameters to LIF, resulting in a ~2% error rate.

My question now is how can I pass a single image to this network? I would like to read a single image from my camera and make a prediction using the NengoDL network. (The images are not necessarily of handwritten digits.)

I have the simulator:

  sim = nengo_dl.Simulator(net, minibatch_size=1, unroll_simulation=10)

I see that sim.run_steps is good for feeding minibatches of images into the network, but this doesn’t seem appropriate for a single image. How can I get a prediction for a single image? Sorry if this is covered somewhere; I wasn’t able to find it!


We can use the same method to feed in a single image as we would for a batch of images (effectively just using a batch size of one). Using the same variable names from the MNIST example, this would look something like

import numpy as np

img = <load image>

# add batch/time dimensions (with size 1)
img = np.reshape(img, (1, 1, <number of pixels in image>))

# tile the image along the time dimension for however many timesteps we need
img = np.tile(img, (1, n_steps, 1))

# run the simulation and feed in our image
sim.run_steps(n_steps, input_feeds={inp: img})

# view the output results for the image (out_p is the output probe from
# the MNIST example); probe data is shaped (batch, time, dims), so take
# the last timestep
output = sim.data[out_p][:, -1]

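As a sanity check, the reshape/tile steps above can be run on a dummy image to verify the array shapes (assuming a 28x28 MNIST-sized input; 30 timesteps is an arbitrary choice here):

```python
import numpy as np

n_steps = 30
img = np.random.rand(28, 28)  # stand-in for a camera frame

# flatten, then add batch and time dimensions of size 1
img = np.reshape(img, (1, 1, 28 * 28))

# repeat along the time axis so the network sees the same
# image at every timestep
img = np.tile(img, (1, n_steps, 1))

print(img.shape)  # (1, 30, 784)
```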

Thanks for your response, that’s very helpful.

I’ve been exploring an idea where I would like to learn an encoding for an image using nengo_dl, then use reinforcement learning to learn a behavior for a robot based on the observed input / encoding.

I’m still in the process of working through the examples that were provided in this discussion:

Do you have any other tips or tricks that you can suggest on doing reinforcement learning using an image as input?


You’ll probably want to look into deep reinforcement learning methods. Deep Q-learning (DQN) would be a good place to start: it is essentially standard TD learning, plus a few extra tricks, such as experience replay and a frozen target network, that make learning more stable.
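To make those stabilization tricks concrete, here is a minimal, hypothetical sketch (not from any particular RL library) of an experience replay buffer and a periodically synced target network, using a linear Q-function on toy random data in place of a real environment:

```python
import random
from collections import deque

import numpy as np

# Toy sketch of two DQN stabilization tricks: an experience replay
# buffer and a periodically synced target network. The linear
# Q-function and random "environment" below are illustrative only.

n_features, n_actions = 4, 2
rng = np.random.default_rng(0)

w = rng.normal(scale=0.1, size=(n_actions, n_features))  # online weights
w_target = w.copy()                                      # frozen target weights
w_init = w.copy()                                        # kept for comparison

buffer = deque(maxlen=1000)  # experience replay buffer
gamma, lr = 0.9, 0.01

def q_values(weights, state):
    # linear Q(s, a) for all actions at once: Q = W s
    return weights @ state

def train_step(batch):
    for state, action, reward, next_state, done in batch:
        # TD target uses the frozen target weights for stability
        target = reward
        if not done:
            target += gamma * np.max(q_values(w_target, next_state))
        td_error = target - q_values(w, state)[action]
        w[action] += lr * td_error * state  # gradient step for linear Q

# fill the buffer with transitions from a dummy environment
for _ in range(500):
    s = rng.normal(size=n_features)
    a = int(rng.integers(n_actions))
    r = float(s[0] > 0)  # toy reward signal
    s2 = rng.normal(size=n_features)
    buffer.append((s, a, r, s2, False))

# training loop: sample random minibatches (breaking temporal
# correlations), and only sync the target net periodically
for step in range(200):
    train_step(random.sample(list(buffer), 32))
    if step % 50 == 0:
        w_target = w.copy()
```

Sampling minibatches from the buffer decorrelates consecutive updates, and holding the target weights fixed between syncs keeps the TD target from chasing itself; those are the two main differences from plain online TD learning.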