A simple ANN-SNN conversion attempt

Hello there! I’m trying to compare different SNN frameworks on the MNIST problem. My idea is quite simple: create an ANN and build an SNN with the same structure, but using Poisson encoding for the input and LIF neurons. Then I train the ANN and use its pre-trained weights in the SNN to perform inference. I know that there are more advanced ANN-SNN conversion methods, but this is just for testing purposes.

The ANN is defined by this Keras model:

tf.keras.Sequential(
    [
        tf.keras.layers.Input(shape=(28, 28), name="input"),
        tf.keras.layers.Flatten(name="flatten"),
        tf.keras.layers.Dense(units=128, activation='relu', use_bias=False, name="hidden"),
        tf.keras.layers.Dense(units=10, activation='softmax', use_bias=False, name="output")
    ]
)
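
For completeness, this is roughly how I train the ANN and produce the weights file that is loaded below (a sketch only; `build_model` is a hypothetical helper returning the Sequential model above, and the hyperparameters are placeholders, not necessarily the exact ones I used):

import tensorflow as tf

# (Sketch) Train the ANN on the raw 28x28 images and save its weights.
# `build_model` is a hypothetical helper returning the Sequential model above;
# the hyperparameters are placeholders.
(X_train, y_train), _ = tf.keras.datasets.mnist.load_data()

model = build_model()
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(X_train, y_train, epochs=5)
model.save_weights('model_weights.h5')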

I already know that NengoDL has its own converter, but I don’t want to use it, as I want to learn the internals of Nengo in order to build more complex models in the future. Here is my data-loading code. Please note that only the test data is used for the SNN:

import numpy as np
import tensorflow as tf
import nengo
import nengo_dl
from nengo.solvers import NoSolver

from utils import load_weights  # attached below

# Load the MNIST data (the input is intentionally not normalized):
(X_train, y_train), (X_test, y_test) = tf.keras.datasets.mnist.load_data()
# Flatten the images:
X_train = np.reshape(X_train, (X_train.shape[0], -1))
X_test = np.reshape(X_test, (X_test.shape[0], -1))

# Load the weights of the pre-trained ANN:
weights = load_weights('model_weights.h5')

# Define the SNN neurons
n_input = 784
n_hidden = 128
n_out = 10
presentation_time = 350   # Timesteps to present an example to the network

# NengoDL expects inputs with 3-D shape (n_instances, n_timesteps, n_features), so we need to add the time
# dimension. In this case, we present each image to the network for the whole presentation time, so we tile
# it along the time axis:
X_test = np.tile(X_test[:, None, :], reps=(1, presentation_time, 1))

I think this code is fine; however, here is my first question:

  1. The data is not normalized to [-1, 1], the range implied by the default Ensemble radius. Is this a problem? As far as I know, values outside this range cannot be accurately represented when using the NEF (see the scaling sketch below).
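
This is how I would scale the pixels to [0, 1] (a sketch only; I have not applied it in the runs reported below):

import numpy as np

# (Sketch) Scale pixel values from [0, 255] into [0, 1]. Not applied in the
# runs below, since I wanted to test the unnormalized case first.
X_test = X_test.astype(np.float32) / 255.0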

Next, I build my network, trying to replicate the Keras one, and load the pre-trained weights. This is the code of the network:

# Define the network
with nengo.Network(seed=0) as net:
    net.config[nengo.Ensemble].max_rates = nengo.dists.Choice([100])  # Max firing rate is 100Hz ??
    net.config[nengo.Ensemble].intercepts = nengo.dists.Choice([0])
    net.config[nengo.Connection].synapse = None
    net.config[nengo.Connection].solver = NoSolver()

    # Define the layers
    # Input node; as I understand it, NengoDL overrides this constant output with the values passed via `data`:
    inp = nengo.Node(output=np.ones(n_input))

    # Define the Poisson input encoding.
    # We wrap a rate-based neuron in PoissonSpiking to convert the input into Poisson spike trains. Since the
    # inputs are all non-negative, the ReLU does not change the input value.
    poissonInput = nengo.Ensemble(n_neurons=n_input, dimensions=1,
                                  neuron_type=nengo.PoissonSpiking(nengo.RectifiedLinear(), amplitude=1))
    hidden = nengo.Ensemble(n_neurons=n_hidden, dimensions=1, neuron_type=nengo.LIF())
    out = nengo.Ensemble(n_neurons=n_out, dimensions=1, neuron_type=nengo.LIF())

    # Define the connections
    nengo.Connection(inp, poissonInput.neurons)
    # Keras stores Dense kernels as (n_in, n_out), while Nengo transforms are (n_post, n_pre), hence the transpose:
    inpToHidden = nengo.Connection(pre=poissonInput.neurons, post=hidden.neurons,
                                   transform=np.transpose(weights['inpToHidden']))
    hiddenToOut = nengo.Connection(pre=hidden.neurons, post=out.neurons,
                                   transform=np.transpose(weights['hiddenToOut']))
    # Add monitors (Probes)
    out_probe = nengo.Probe(out.neurons, synapse=0.1)

In this code, I have some questions:

  1. Is the definition of the Poisson encoding correct? I tried with a Direct neuron, but it throws an error.
  2. Is the connection between inp and poissonInput.neurons all-to-all or one-to-one? I need a one-to-one mapping in this case (see the sketch after this list).
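
If the default happens to be all-to-all, I assume I could force the one-to-one mapping with an explicit identity transform, as in this sketch (please correct me if that is not the idiomatic way):

# (Sketch) Force an explicit one-to-one mapping from the input node to the
# Poisson encoders; n_input = 784 as defined above.
nengo.Connection(inp, poissonInput.neurons, transform=np.eye(n_input))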

Finally, once the network is defined, I run inference to test its performance. The method is also quite simple: first, I run the simulation for the presentation time of a given instance; second, I (try to) count the number of spikes of each output neuron during the presentation time; the predicted class is then the neuron with the maximum number of spikes. Finally, I reset the network to present the next instance. This is a slow method, but it replicates the approach of the other frameworks. Here is how it is done:

with nengo_dl.Simulator(net) as sim:
    y_preds = np.zeros(X_test.shape[0])
    for i in range(X_test.shape[0]):
        # Present one instance for the full presentation time:
        sim.run_steps(presentation_time, data={inp: X_test[i][None, :, :]}, progress_bar=False)
        # Predicted class = output neuron with the most positive probe samples:
        y_preds[i] = np.argmax(np.sum(sim.data[out_probe] > 0, axis=0))
        # Reset the simulator state before the next instance:
        sim.reset()

    acc = np.sum(y_preds == y_test) / len(y_test)
    print("Accuracy: ", acc)

Using this method, I achieve an accuracy of approximately 0.45, which is surprisingly low: in other frameworks, using the same approach, the model achieved up to 0.97 accuracy. Any idea what is wrong? I suspect that I’m not counting the spikes correctly, as I noticed that the differences between the output neurons are almost negligible. I have also tried np.sum(sim.data[out_probe]) and averaging; in either case, the results are the same. To my knowledge, the output of sim.data[out_probe] at each timestep is the “instantaneous” firing rate, so the neuron with the highest average firing rate across all timesteps should be the predicted class, but this does not work as I expected. I would really appreciate any clue about what I’m doing wrong.
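
For reference, this is how I would count raw spikes instead, with an additional unfiltered probe, in case the filtered probe is the problem (a sketch; spike_probe is an extra probe I would add inside the network definition, and I’m not sure this is the intended approach):

# (Sketch) Inside the `with nengo.Network(...)` block, add an unfiltered probe:
#     spike_probe = nengo.Probe(out.neurons, synapse=None)
# Then, in the inference loop: each spike appears as an impulse of height 1/dt,
# so the per-neuron spike count is the sum over timesteps multiplied by dt.
counts = np.sum(sim.data[spike_probe], axis=0) * sim.dt
y_preds[i] = np.argmax(counts)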

Below, I attach the load_weights function, and the model weights are available at https://drive.google.com/file/d/1Be9DIvtmwuiXYpgHB_xMuYOwLcHHfrEz/view?usp=sharing in case they are necessary to replicate my work.

Cheers!

utils.py (508 Bytes)