NengoDL MNIST tutorial: some questions about Conv2D TensorNode

Hello, I’m trying to understand the detailed mechanism of spiking CNN.
(reference: NengoDL MNIST tutorial)

This is a part of the code:

neuron_type = nengo.LIF(amplitude=0.01)


# add the third convolutional layer
conv3 = nengo_dl.Layer(tf.keras.layers.Conv2D(filters=128, strides=2, kernel_size=3))(
    conv2, shape_in=(12, 12, 64)
)
conv3_lif = nengo_dl.Layer(neuron_type)(conv3)


conv3_p = nengo.Probe(conv3, label="conv3_p")
conv3_p_filt = nengo.Probe(conv3, synapse=0.1, label="conv3_p_filt")

conv3_lif_p = nengo.Probe(conv3_lif, label="conv3_lif_p")
conv3_lif_p_filt = nengo.Probe(conv3_lif, synapse=0.1, label="conv3_lif_p_filt")

There are two nengo_dl.Layer objects in the third convolutional layer, and I probed data from each of them for 30 timesteps, with and without synaptic filtering. I got the following results.

[All situations are in the inference phase, not training phase]

  1. From what I understand, the LIF layer adds nonlinearity to the model and acts as a spiking version of an activation function, in place of the non-spiking ReLU. The filtered output from the Conv2D layer (2) becomes input to the LIF layer, inducing spikes in the LIF neurons (3), (4)… But I have no idea how to interpret the output of the Conv2D layer. Is the Conv2D layer a group of spiking neurons? Where is the ‘synapse’ at which synaptic filtering takes place?

  2. The final output of the second convolutional layer, conv2_lif_p_filt, would be similar to (4); so how does the third Conv2D layer work? Does it apply convolution directly to the conv2_lif_p_filt data with learned filters?

  3. I also probed data from the first convolutional layer, and the output of the Conv2D layer looks like the following. (left: without synaptic filtering (conv1_p), right: with filtering (conv1_p_filt))

    Is the input image represented as an array of constant real values, similar to non-spiking CNN models?

Thank you in advance. Any references about details (such as synaptic filtering) are greatly welcomed.

Hi @neuroshin, I’ll try to address your questions below:

Yup. This is correct.

The NengoDL MNIST tutorial notebook doesn’t actually use synapses on the connections between the layers. This code:

net.config[nengo.Connection].synapse = None

sets the default synapse for all of the connections in the model to None, which removes the synapse (and thus the filtering) between the layers entirely. If you instead set this to a value other than None or 0, that synapse would be applied to the connection between every layer. You can also specify the synapse on individual connections by using the synapse parameter when making a connection. As an example:

x1 = nengo_dl.Layer(...)
x2 = nengo_dl.Layer(neuron_type)(x1, synapse=0.01)

would create a connection from x1 to x2 with a synaptic filter with a time constant of 0.01 s.

The synaptic filtering for your conv3_p_filt and conv3_lif_p_filt probes is applied on the probes themselves, and doesn’t affect the connections between the layers. We typically put a filter on a probe to smooth out the noise introduced by the spiking behaviour, making trends in the data easier to see. The default synapse on a Probe (i.e., if you don’t specify a value) is None.

The Keras Conv2D layer applies what is essentially a linear transformation (the convolution of a 2D filter – note, this is not the synaptic filter) to a signal. In Nengo, this would be roughly analogous to the transform parameter (but in 2D) on a nengo.Connection. Thus, the Conv2D layer is not a group of spiking neurons, just a linear transformation (a bunch of matrix multiplications).
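To see why a Conv2D layer is “just” a linear transformation, here is a minimal NumPy sketch (not NengoDL or Keras code; the array sizes and the conv2d helper are illustrative). It implements a valid-mode sliding-window convolution and checks that it satisfies linearity, which is exactly what distinguishes it from a (nonlinear, spiking) neuron layer:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D convolution (cross-correlation, as in Conv2D) via sliding window."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Each output entry is a weighted sum of a patch of the input,
            # i.e., one row of a big (sparse, weight-shared) matrix multiply.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

rng = np.random.default_rng(0)
a = rng.random((6, 6))
b = rng.random((6, 6))
k = rng.random((3, 3))

# Linearity check: conv(a + 2b) == conv(a) + 2*conv(b).
# No thresholds, no state, no spikes -- purely a linear map.
assert np.allclose(conv2d(a + 2 * b, k), conv2d(a, k) + 2 * conv2d(b, k))
```

Because the operation is linear, it can sit on a connection (like Nengo’s transform) rather than inside a neuron population.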

The Conv2D layers (transforms) are applied directly to the output of the preceding neuron population. Since the connection synapses for the entire model are set to None, the convolution operation is performed on the raw spiking output of each neuron layer.

Sort of. TensorFlow (which NengoDL calls in the background) can operate on multi-dimensional signals. In the MNIST example, the input to the network is a 2D array of constant real values representing the input MNIST image. What you see in the probed data is a 1D flattened representation of this 2D matrix of values.
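As a small illustration of that flattening (a generic NumPy sketch, not the tutorial’s actual preprocessing code; the 4×4 “image” is a stand-in for a 28×28 MNIST digit):

```python
import numpy as np

# A toy 4x4 "image" of constant real values, standing in for a 28x28 MNIST digit.
image = np.arange(16, dtype=float).reshape(4, 4)

flat = image.reshape(-1)       # the 1D form you see in the probed data
restored = flat.reshape(4, 4)  # the 2D structure is fully recoverable

# Flattening loses no information; it only changes the indexing.
assert flat.shape == (16,)
assert np.array_equal(restored, image)
```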


Thank you very much for the detailed answer! It was really helpful :slightly_smiling_face:

I understood that the synaptic filtering applied to neural connections and that applied to probes are independent attributes defined in different parts of the code, but I am still a little confused.
Are these two kinds of synaptic filtering conceptually identical? I have roughly understood that synaptic filtering on probes reduces noise and accumulates spike data… It seems that I’m not familiar with synaptic filtering.

In Nengo, the synapse on a Probe and the synapse on a Connection both filter the signal using a synapse model (and they use the same synapse code in Nengo) to perform the smoothing / filtering. By default, the synapse model is the Lowpass synapse, which functions like a lowpass filter.

However, there are other synapse models available to use. It really depends on your use case.

As a side note, a synapse can be thought of as applying a linear transformation (i.e., applying a filter) to a continuous-time signal (e.g., a spike train). For the Lowpass synapse, it convolves the signal with an exponential impulse response, which results in a lowpass-filtered (smoothed) version of the original signal.
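Here is a minimal sketch of that filtering, using a discrete-time exponential (first-order lowpass) filter similar in spirit to Nengo’s Lowpass synapse — the timestep, time constant, and spike times below are all illustrative, not taken from the tutorial:

```python
import numpy as np

dt = 0.001   # simulation timestep (s)
tau = 0.1    # synaptic time constant (s), like the synapse=0.1 on the probes above

# A toy spike train: spikes are impulses of height 1/dt, so each carries unit area.
spikes = np.zeros(300)
spikes[[50, 60, 70, 150, 160]] = 1.0 / dt

# Discrete lowpass filter: y[t] = a * y[t-1] + (1 - a) * x[t], with a = exp(-dt/tau).
# Convolving with an exponential impulse response is equivalent to this recurrence.
a = np.exp(-dt / tau)
filtered = np.zeros_like(spikes)
for t in range(1, len(spikes)):
    filtered[t] = a * filtered[t - 1] + (1 - a) * spikes[t]

# Each spike bumps the trace up, and it decays exponentially in between --
# the spiky input becomes a smooth, lowpass-filtered signal.
assert filtered.max() < spikes.max()
```

This is why the filtered probe traces look like smooth curves that rise where spikes cluster and decay afterwards, while the unfiltered probes show the raw impulses.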

1 Like

Good… I could get a little sense about synaptic filtering.
In this model, synaptic filtering is applied only to the probes, so is it correct to think that the spike data without synaptic filtering (such as conv3_p, conv3_lif_p) is what is actually transmitted between layers?

(According to your answer, it seems to be correct…)
Sorry if there were too many questions :confused:

That is correct! :smiley:

No need to apologize! The more questions, the better. :smile:

1 Like

Hello @neuroshin, you might possibly find this topic interesting.

Thank you for sharing this interesting topic! I will take a look at it.