Implementing a spiking convolutional autoencoder with Nengo

Hello! I’ve been doing some work involving SNNs and came across Nengo as a tool to implement them. So far it is the best simulator I have found, so I have been learning to use it over the past couple of days.
I’ve been trying to implement a convolutional autoencoder like the one seen here: DataTechNotes: Convolutional Autoencoder Example with Keras in Python, but I am having some trouble. I’m able to convert the Keras model using the converter and get the same results as in Keras, but my network has some issues when it is converted to a spiking network.

Here is the network I am converting (max pooling was replaced with Conv2D layers with a stride of 2, since there is no max pooling in Nengo):

import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow.keras import Model
from tensorflow.keras.layers import Conv2D, Input, UpSampling2D

import nengo
import nengo_dl

# encoder: each pooling step is a strided convolution
input_img = Input(shape=(28, 28, 1))
conv1 = Conv2D(12, (3, 3), activation=tf.nn.relu, padding='same')(input_img)
max1_conv = Conv2D(12, (3, 3), strides=2, activation=tf.nn.relu, padding='same')(conv1)
conv2 = Conv2D(8, (3, 3), activation=tf.nn.relu, padding='same')(max1_conv)
encoded = Conv2D(8, (3, 3), strides=2, activation=tf.nn.relu, padding='same')(conv2)

# decoder: upsample back to 28x28 and reconstruct the image
conv3 = Conv2D(8, (3, 3), activation=tf.nn.relu, padding='same')(encoded)
up1 = UpSampling2D((2, 2))(conv3)
conv4 = Conv2D(12, (3, 3), activation=tf.nn.relu, padding='same')(up1)
up2 = UpSampling2D((2, 2))(conv4)
decoded = Conv2D(1, (3, 3), activation=tf.nn.sigmoid, padding='same')(up2)
autoencoder = Model(input_img, decoded)

Then I follow the Keras-to-SNN example to convert and train the network:

converter = nengo_dl.Converter(autoencoder)

with nengo_dl.Simulator(converter.net, minibatch_size=200, seed=0) as sim:
    sim.compile(
        optimizer='rmsprop',
        loss={converter.outputs[decoded]: tf.losses.binary_crossentropy},
        metrics=[tf.metrics.binary_accuracy],
    )
    # train_images/test_images are flattened MNIST digits, shape (n, 1, 784)
    sim.fit(
        {converter.inputs[input_img]: train_images},
        {converter.outputs[decoded]: train_images},
        validation_data=(
            {converter.inputs[input_img]: test_images},
            {converter.outputs[decoded]: test_images},
        ),
    )
    sim.save_params('./AE_params')

Finally, I use the run_network function from the example to run the spiking network:

def run_network(
    activation,
    params_file='AE_params',
    n_steps=30,
    scale=1,
    synapse=None,
    n_test=400,
):

    # convert keras model to nengo network
    nengo_converter = nengo_dl.Converter(
        autoencoder,
        swap_activations={tf.nn.relu: activation},
        scale_firing_rates=scale,
        synapse=synapse,
    )

    net = nengo_converter.net

    # get i/o objects
    nengo_input = nengo_converter.inputs[input_img]
    nengo_output = nengo_converter.outputs[decoded]

    # repeat inputs for num timesteps
    tiled_test_images = np.tile(test_images[:n_test], (1, n_steps, 1))

    # options to speed sim
    with net:
        nengo_dl.configure_settings(stateful=False)

    # build network, load weights, run inference on test images
    with nengo_dl.Simulator(net, minibatch_size=50, seed=0) as nengo_sim:
        nengo_sim.load_params(params_file)
        data = nengo_sim.predict({nengo_input: tiled_test_images})

        # plot images comparison
        imgs = data[nengo_output]
        n = 5
        for i in range(n):
            ax = plt.subplot(2, n, i+1)
            plt.imshow(np.reshape(test_images[i], (28, 28)))
            ax.get_xaxis().set_visible(False)
            ax.get_yaxis().set_visible(False)

            ax = plt.subplot(2, n, n+i+1)
            plt.imshow(imgs[i, n_steps-1].reshape((28, 28)))
            ax.get_xaxis().set_visible(False)
            ax.get_yaxis().set_visible(False)
        plt.show()

run_network(activation=nengo.SpikingRectifiedLinear(), synapse=0.001)

Once everything is finished, I’m left with this output:
[Figure_1: the spiking network’s reconstructed output]

I’ve tried optimizing the network as in the examples, using synaptic smoothing, firing rate scaling, and firing rate regularization, but each of them results in output that looks like this.
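For reference, the smoothing and scaling attempts were just variations on the run_network call above, along these lines (the particular values here are illustrative, not the exact ones I used):

# synaptic smoothing: low-pass filter each spiking neuron's output
run_network(activation=nengo.SpikingRectifiedLinear(), synapse=0.01)

# firing rate scaling: scale rates up at inference time to reduce spike noise
run_network(activation=nengo.SpikingRectifiedLinear(), scale=100, synapse=0.01)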

Does anyone have suggestions on how to properly convert this convolutional autoencoder into its spiking equivalent?

Hello! Welcome to the community!

I had a chance to try out implementing this, and nothing jumps out at me about your approach. By setting scale_firing_rates sufficiently high (2000), I was able to get the target output. I’d recommend recording the activity of the neurons in each layer and plotting it, to get a sense of the average firing rate per layer; I suspect you could tune the scale_firing_rates parameter for each layer and drop it down a fair bit for most of them.
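For example, here is a minimal sketch of per-layer probing along the lines of the Keras-to-SNN tutorial; it assumes the variables from your run_network function, and the layer selection and 100-neuron sample size are just illustrative:

# probe a sample of neurons in a few layers to estimate firing rates
with net:
    layer_probes = {
        name: nengo.Probe(nengo_converter.layers[tensor][:100])
        for name, tensor in [("conv1", conv1), ("conv2", conv2), ("encoded", encoded)]
    }

with nengo_dl.Simulator(net, minibatch_size=50) as sim:
    sim.load_params(params_file)
    data = sim.predict({nengo_input: tiled_test_images})

for name, probe in layer_probes.items():
    # multiplying by scale_firing_rates undoes the amplitude scaling, so the
    # mean over time approximates the underlying firing rate in Hz
    print(name, "mean firing rate:", (data[probe] * scale).mean())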

You can also try adding batch normalization between layers, which helps keep the magnitude of the input to each layer consistent and can make conversion to spiking neurons smoother. Another option is to introduce noise into the network during training (Keras has noise layers that are easy to include), which can also improve spiking performance. A sketch of both is below.
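As a rough sketch of what that could look like for the first encoder block (the noise stddev and layer placement here are just illustrative):

from tensorflow.keras.layers import BatchNormalization, GaussianNoise

input_img = Input(shape=(28, 28, 1))
x = GaussianNoise(0.1)(input_img)  # noise is only injected during training
x = Conv2D(12, (3, 3), activation=tf.nn.relu, padding='same')(x)
x = BatchNormalization()(x)  # keeps the input magnitude to the next layer consistent
x = Conv2D(12, (3, 3), strides=2, activation=tf.nn.relu, padding='same')(x)
# ... remaining blocks as before

One caveat: if I remember correctly, the NengoDL converter only supports BatchNormalization layers when converting with inference_only=True.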

I’ve attached the notebook; please let me know if it doesn’t work for you or if you have any other questions! (Note: for the notebook I’m using TF 2.4.1, the dev branch of nengo-dl, and Nengo 3.1.0.)

AE spiking.ipynb (61.6 KB)

Thanks for the reply. I just saw this, but I went ahead and did it a different way. I followed the optimizing SNN example here: Optimizing a spiking neural network — NengoDL 3.4.3.dev0 docs and tweaked a few things: I swapped the LIF neurons for SpikingRectifiedLinear neurons, changed their amplitude to 1/500, and changed the out_p_filt synapse to 0.1. That gave me some alright results; the changes are sketched below.
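Concretely, the tweaks relative to the tutorial’s network definition look roughly like this (everything else follows the example unchanged):

# spiking ReLU with a reduced amplitude, instead of the tutorial's LIF neurons
neuron_type = nengo.SpikingRectifiedLinear(amplitude=1 / 500)

# filtered output probe with a longer synapse
out_p_filt = nengo.Probe(out, synapse=0.1, label="out_p_filt")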

However, I will go ahead and try the things you suggested. Hopefully it won’t be too hard, but if I get stuck I may come back and share more of what I find.
