3-Band Imagery examples?

I was curious whether anyone has code snippets or examples of working with 3-band images for classification. I've successfully repurposed the MNIST-Spiking example for other datasets, but had to convert the images to greyscale to load them properly. I am particularly interested in how to format the data, configure the nengo input node, and add the time dimension to the data.

If I understand it correctly, the MNIST-spiking example stores and feeds the image data as an array, and adds a time dimension to that array. But how should this be changed for 3-band imagery? Right now, the data is an array of arrays that each contain 3 values for the R, G, and B channels, in contrast to the array of single values in the MNIST example. Should it be an array of 3 large arrays, each representing a colour channel, or one large array of per-pixel arrays, each containing 3 values?

There shouldn’t be any significant differences between RGB and grey-scale inputs, it’s essentially just changing the number of input dimensions.

For example, if we look at this example, our images are 28x28 pixels with one channel. So our input arrays have shape (n_images, n_steps, 28*28*1), our input node has size_out=28*28*1 (inp = nengo.Node([0] * 28 * 28)), and our first convolutional layer has shape_in=(28, 28, 1).

So if instead our images were, for example, 28x28 pixels with 3 channels, then our input arrays would have shape (n_images, n_steps, 28*28*3), our input node would look like inp = nengo.Node([0] * 28 * 28 * 3), and our first convolutional layer would have shape_in=(28, 28, 3).
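To make that concrete, here is a minimal NumPy sketch of flattening a batch of RGB images and tiling them along a time dimension, as described above (the array names and the choice of n_steps are illustrative, not from the original example):

```python
import numpy as np

# Hypothetical batch of 10 RGB images, 28x28 pixels, channels-last.
images = np.random.rand(10, 28, 28, 3)

# Flatten each image into a single vector of length 28*28*3 = 2352.
flat = images.reshape((images.shape[0], -1))

# Add the time dimension by repeating each flattened image for n_steps
# timesteps, giving shape (n_images, n_steps, 28*28*3).
n_steps = 30
tiled = np.tile(flat[:, None, :], (1, n_steps, 1))

print(tiled.shape)  # (10, 30, 2352)
```

The same flattened length (28\*28\*3) would then be used for the input node's size.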

Note that you will need to pay attention to whether the channel dimension comes first or last in your images (i.e., are they (28, 28, 3) or (3, 28, 28)). Either one works, you just need to make sure you’re setting the data_format parameter of tf.layers.conv2d appropriately (see https://www.tensorflow.org/api_docs/python/tf/layers/conv2d).
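If your images happen to be stored channels-first and you'd rather convert them than change data_format, a simple transpose does it (a sketch with illustrative array names):

```python
import numpy as np

# Hypothetical images stored channels-first: (n_images, 3, 28, 28).
chan_first = np.random.rand(10, 3, 28, 28)

# Move the channel axis to the end: (n_images, 28, 28, 3).
chan_last = np.transpose(chan_first, (0, 2, 3, 1))

print(chan_last.shape)  # (10, 28, 28, 3)
```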


Sounds good, glad to hear that 3 band imagery can be handled very similarly! I appreciate the reply!