Ensemble CNN and SNN

My comment about the average pooling transforms is in regard to the comparison between the two notebooks you provided. You are correct in understanding that if you do something like this:

    x = nengo_dl.Layer(tf.keras.layers.AveragePooling2D(pool_size=2, strides=2))(
        x, shape_in=(14, 14, 64)
    )

your Nengo network will contain an average pooling layer. However, if you look at the network (net) in the Ensemble_SNN notebook, you’ll see that it only contains an input node and two nengo.Ensemble layers (which are equivalent to the nengo_dl.Layer(neuron_type)(x) layers in the Ensemble_SNN1 notebook). Thus, the Ensemble_SNN network is missing the convolution layers and the average pooling layers, which is probably why one network works while the other doesn’t. If you want both notebooks to have the same network structure, you’ll need to implement both the convolution transforms (see the example notebook I posted in my previous reply) and the average pooling transform, as sketched below.
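
For reference, here is a minimal sketch of what those two pieces could look like when built directly out of Nengo objects. It assumes a 28x28x1 input, and the filter count, strides, and layer names are illustrative placeholders, so adapt the shapes to your own network:

    import numpy as np
    import nengo
    import nengo_dl
    import tensorflow as tf

    with nengo.Network() as net:
        inp = nengo.Node(np.zeros(28 * 28))

        # convolution implemented as a transform on a nengo.Connection
        conv0 = nengo.Convolution(
            n_filters=64,
            input_shape=(28, 28, 1),
            kernel_size=(3, 3),
            strides=(2, 2),
            padding="same",  # 28x28x1 input -> 14x14x64 output
        )
        layer0 = nengo.Ensemble(conv0.output_shape.size, 1, neuron_type=nengo.LIF())
        nengo.Connection(inp, layer0.neurons, transform=conv0, synapse=None)

        # average pooling implemented as a TensorNode layer, as in your snippet
        pool0 = nengo_dl.Layer(
            tf.keras.layers.AveragePooling2D(pool_size=2, strides=2)
        )(layer0.neurons, shape_in=(14, 14, 64))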

I’m not entirely sure what tradeoff you are asking about here, so I’ll answer to the best of my understanding of your question. For spiking neural networks, part of the drop in accuracy you see (compared to “conventional” neural networks) is due to the extra noise the spiking behaviour adds to the network. The noise arises because spiking neurons cannot represent a constant value. Rather, they only produce values at certain points in time (whenever they emit a spike), and at all other times their outputs are zero. To increase the accuracy of the network, one can introduce smoothing to the spike trains. Biologically, this is the post-synaptic current applied at the synapses between neurons. In Nengo, this smoothing is accomplished by using the synapse parameter on any nengo.Connection.
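
As a rough sketch, assuming net is your built (but not yet simulated) network, you could add a low-pass filter to every connection after the fact (the 0.01 s time constant is just an illustrative value):

    # apply a synaptic low-pass filter to smooth the spike trains
    for conn in net.all_connections:
        conn.synapse = nengo.Lowpass(0.01)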

Alternatively, spike noise can be reduced by increasing the firing rate of the neurons. If your neurons can fire arbitrarily fast (up to 1 spike per simulation timestep), they essentially become rate neurons. However, increasing the firing rates diminishes the main advantage of using spiking neurons in the first place, which is the reduction in energy usage (particularly on a chip like Loihi). Our Keras-to-SNN NengoDL example walks through these concepts using an example network.
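
If you go through nengo_dl.Converter (as that example does), both of these knobs are exposed as arguments. A minimal sketch, assuming model is a Keras model and using purely illustrative values:

    converter = nengo_dl.Converter(
        model,
        swap_activations={tf.nn.relu: nengo.SpikingRectifiedLinear()},
        scale_firing_rates=100,  # higher firing rates -> less spike noise
        synapse=0.01,  # low-pass filter applied to the connections
    )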

I believe that this is because you have not specified the slicing of the x tensor properly. The first dimension of the tensor should be the minibatch size, so what you want to be slicing is the second dimension, like so:

    x0 = x[:, :layer4.size_out]
    x1 = x[:, layer4.size_out:]

However, once you have fixed that issue, there seem to be other issues with your network as well. From a quick glance, the layer input and output sizes don’t seem to match up properly, so you’ll need to fix those before the network can be run. This post uses a similar function (the TensorFlow Add layer) and may help you.
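
As a rough sketch of that pattern (layer4 and layer5 are placeholders for your own layers, and the two branches must have the same size for the addition to work), the slicing and the add can live inside a single nengo_dl.Layer function:

    def add_branches(x):
        # x has shape (minibatch_size, layer4.size_out + layer5.size_out),
        # so the slicing happens along the second dimension
        x0 = x[:, :layer4.size_out]
        x1 = x[:, layer4.size_out:]
        return x0 + x1  # equivalent to tf.keras.layers.Add()([x0, x1])

    out = nengo_dl.Layer(add_branches)(x)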