How to use a 2D image array as input?

Probably a trivial question, but I’ve been unable to figure out how to use a Nengo Node to feed an image (or a different image for each timestep) to the network. The code looks something like this:

import numpy as np
import nengo

def image(t):
    return np.random.randn(32, 32)

with nengo.Network() as model:
    N1 = nengo.Ensemble(32, dimensions=2)
    input_ = nengo.Node(image)

I get the error “Node output must be a vector (got shape (32,32))”.

If I try to flatten the image in the image() function, I get “function output size is incorrect; should return vector of size 1”.

How can I resolve this?

Thanks a lot!

You’re right that you need to flatten the image. The second error is probably because you were connecting to N1 (which uses the encoded vector space, where you’d set dimensions=1), rather than the neuron space (N1.neurons). Something like this should work:

import numpy as np
import nengo

def image(t):
    return np.random.randn(32, 32).flatten()

with nengo.Network() as model:
    N1 = nengo.Ensemble(1024, dimensions=1)
    input_ = nengo.Node(image)
    nengo.Connection(input_, N1.neurons)

Note that the dimensions=1 argument doesn’t really matter here, since we’re connecting directly to the neurons.
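Since the original question also mentions feeding a different image for each timestep, here’s one way to do that: have the node function index into a pre-loaded array based on `t`. This is a hedged sketch, not Nengo-specific API; `image_stack` is a made-up placeholder for your dataset, and `dt = 0.001` assumes Nengo’s default simulation timestep.

```python
import numpy as np

# Hypothetical stack of 100 pre-generated 32x32 images (a placeholder;
# in practice this would be your dataset).
rng = np.random.default_rng(0)
image_stack = rng.standard_normal((100, 32, 32))

dt = 0.001  # Nengo's default simulation timestep

def image(t):
    # Map simulation time to an image index, clamping to the last image
    # so the function stays valid however long the simulation runs.
    idx = min(int(t / dt), len(image_stack) - 1)
    # Node outputs must be flat vectors, so flatten each frame here.
    return image_stack[idx].flatten()

# This function would then be passed to nengo.Node(image) as before.
```

The clamping is just one policy; you could instead cycle through the stack with a modulo if you want the images to repeat.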

Thanks a lot, it works! Is it possible to use a 2D image as input in general, i.e. without flattening it?

Not at the moment; in Nengo, all values must be vectors. But it’s something we’re thinking about adding to the API!
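In the meantime, a common workaround (plain NumPy, nothing Nengo-specific) is to flatten at the input boundary and reshape back to 2D whenever you need the image layout again; the round trip loses no information:

```python
import numpy as np

image_2d = np.arange(12).reshape(3, 4)  # toy "image" stand-in

flat = image_2d.flatten()       # what the Node must return: shape (12,)
restored = flat.reshape(3, 4)   # recover the 2D layout afterwards

assert np.array_equal(restored, image_2d)  # round trip is lossless
```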

Great, thanks!