Hi Nengo team!
I have been poring over the nengo_loihi documentation lately, as well as the examples (namely the Convolutional networks one). I'm hoping to get a better understanding of how the `conv_layer` function works, along with other related aspects of spiking CNNs on Loihi. The code looks simple, but I've been struggling to grasp everything that is happening in pseudocode/layman's terms. Specifically, my questions are:
- How does `layer = nengo.Ensemble(conv.output_shape.size, 1).neurons` implement the activation function? (And what activation function is it?)
- What is happening with `init=np.ones((1,1,1,1))` in the first convolution layer in the for loop below?

  `layer, conv = conv_layer(inp, 1, input_shape, kernel_size=(1,1), init=np.ones((1,1,1,1)))`

  Since `np.ones((1,1,1,1)) == array([[[[1.]]]])`, is this the actual kernel that's convolving (correlating) over the image, one pixel at a time, from `inp` to an ensemble of equivalent size?
- Any specific reason for choosing 1, 6, and 24 filters for each layer in the spiking CNN? Or did they just give good results for this MNIST classification example?
- Can pooling (avg./max) be implemented on Loihi, and if so, how? I noticed that nengo_extras has `ConvLayer` and `PoolLayer` classes, but those seem to be outdated, Keras-specific, and/or incompatible with Loihi?
- Finally, I found two discrepancies between the MNIST dataset provided by `urlretrieve('http://deeplearning.net/data/mnist/mnist.pkl.gz', 'mnist.pkl.gz')` and the one provided by `keras.datasets.mnist.load_data()`. The pickle file seems to contain only 50000 training examples rather than 60000, and its maximum pixel value is not 1.0 but 0.99609375. This isn't an important question; I was just wondering why the datasets differ?
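To make the first question concrete: as far as I know, `nengo.Ensemble` defaults to LIF neurons, and connecting via `.neurons` targets them directly rather than through decoded NEF functions, so my guess is the activation is the LIF rate curve. Here's a small NumPy sketch of my understanding of that curve (my own construction, not taken from the nengo_loihi source; the `tau_rc`/`tau_ref` values match Nengo's `LIF` defaults):

```python
import numpy as np

def lif_rate(J, tau_rc=0.02, tau_ref=0.002):
    """Steady-state LIF firing rate (Hz) for input current J.

    My reading of the rate curve behind Nengo's default LIF neurons;
    tau_rc/tau_ref are the nengo.LIF default parameters.
    """
    J = np.asarray(J, dtype=float)
    rates = np.zeros_like(J)
    active = J > 1.0  # below the threshold current, the neuron stays silent
    rates[active] = 1.0 / (tau_ref + tau_rc * np.log1p(1.0 / (J[active] - 1.0)))
    return rates

# Silent below threshold, monotonically increasing above it.
print(lif_rate([0.5, 1.5, 5.0]))
```

Please correct me if the on-chip activation is something else (e.g. a discretized Loihi variant of this curve).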
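For the 1x1 kernel question, here's a quick NumPy check of my reading that correlating with a 1x1 kernel of ones just copies each pixel through unchanged (a hand-rolled "valid" correlation for illustration, not nengo_loihi's actual implementation):

```python
import numpy as np

def correlate2d_valid(image, kernel):
    """Plain 'valid'-mode cross-correlation, written out for clarity."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.empty((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(16, dtype=float).reshape(4, 4)
kernel = np.ones((1, 1))  # the (1, 1) spatial part of np.ones((1, 1, 1, 1))

# A 1x1 kernel of ones reproduces the image pixel-for-pixel.
assert np.array_equal(correlate2d_valid(image, kernel), image)
```

If that's right, the first `conv_layer` call would just be a one-to-one mapping from the input pixels onto the neurons.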
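Regarding the pixel-value discrepancy, a quick arithmetic check: 0.99609375 is exactly 255/256, which makes me suspect the pickle file was normalized by dividing the raw uint8 pixels by 256 rather than 255 (this is just my guess, not something I've confirmed in the dataset's packaging):

```python
# 0.99609375 == 255/256 exactly, so the pickle data looks like uint8
# pixels divided by 256; keras.datasets.mnist.load_data() returns the
# raw 0-255 integer values instead.
print(255 / 256)  # 0.99609375
print(255 / 255)  # 1.0
assert 255 / 256 == 0.99609375
```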
I believe those are all the questions I have for now. I hope I’m not asking too much of the Neng’ gang.
Thanks in advance, and happy Friday!