Writing a three-dense-layer model with Nengo-Loihi

Hello,

I need to rewrite a model from Keras to Nengo-Loihi and run it on the chip with the nengo_loihi.Simulator backend. The model is a simple three-dense-layer network, but it’s not possible to import it directly from Keras into Loihi. I couldn’t find any examples of dense layers written in Nengo. Would it be possible to implement them? I would appreciate any suggestions; I attach a code snippet below.

from tensorflow.keras.layers import Dense, Dropout, Input
from tensorflow.keras.models import Model

dropoutRate = 0.25

# input_shape is the number of input features, defined elsewhere
inputArray = Input(shape=(input_shape,))
x = Dense(64, activation='relu')(inputArray)
x = Dropout(dropoutRate)(x)
x = Dense(32, activation='relu')(x)
x = Dropout(dropoutRate)(x)
x = Dense(32, activation='relu')(x)
x = Dropout(dropoutRate)(x)

output = Dense(5, activation='softmax')(x)
model = Model(inputs=inputArray, outputs=output)

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

Best,
Bartek

The equivalent of a “dense” connection in Nengo is what we call a “direct connection” in our core documentation: https://www.nengo.ai/nengo/connections.html#direct-connections, or a “neuron to neuron connection” in our Loihi documentation: https://www.nengo.ai/nengo-loihi/examples/neuron_to_neuron.html. These are fully connected weight matrices from one layer’s neurons (ens1.neurons) to another’s (ens2.neurons).
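For example, a minimal neuron-to-neuron connection looks like this (the layer sizes and zero weights here are placeholders):

import numpy as np
import nengo

with nengo.Network() as net:
    ens1 = nengo.Ensemble(n_neurons=64, dimensions=1)
    ens2 = nengo.Ensemble(n_neurons=32, dimensions=1)
    # a dense weight matrix between the two neuron populations;
    # Nengo expects shape (post.n_neurons, pre.n_neurons)
    weights = np.zeros((32, 64))
    nengo.Connection(ens1.neurons, ens2.neurons, transform=weights)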

An example of this in the context of a deep learning Loihi model is available here: https://www.nengo.ai/nengo-loihi/examples/keyword_spotting.html. Note that the connections are of the form nengo.Connection(layer_x.neurons, layer_y.neurons, ...). This example works by taking the pretrained model parameters from NengoDL and mapping them directly onto the connection weights of a Nengo-Loihi model. I think you would want to do something similar: extract the weights from your trained Keras model and import them directly into an equivalent spiking model. Note that there is no softmax on Loihi, so you would either want to approximate it in another way, or apply it as an offline post-processing step.
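A rough sketch of that transfer for your first layer, assuming model is your trained Keras model from above (the bias handling is only noted in a comment, since it needs some care):

import numpy as np
import nengo

# layers[0] is the InputLayer, so layers[1] is the first Dense layer;
# get_weights() returns [kernel, bias], with kernel shape (n_inputs, 64)
w1, b1 = model.layers[1].get_weights()

with nengo.Network() as net:
    inp = nengo.Node(np.zeros(w1.shape[0]))
    layer_1 = nengo.Ensemble(
        n_neurons=64, dimensions=1, neuron_type=nengo.SpikingRectifiedLinear()
    )
    # Nengo applies transforms as `transform @ x`, so the Keras kernel
    # is transposed to shape (64, n_inputs)
    nengo.Connection(inp, layer_1.neurons, transform=w1.T, synapse=None)
    # b1 would still need to be mapped onto the neuron biases (or fed in
    # through a constant-output Node)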

For your reference, there is another good example here: https://www.nengo.ai/nengo-loihi/examples/mnist_convnet.html, which performs the training directly on the Nengo model, without the extra step of converting weights from one model format to another.

Hi Aaron,
Thank you for the answer. I wrote the following code:

import numpy as np
import nengo
import nengo_dl
import nengo_loihi

# n_inputs is the number of input features, defined elsewhere
with nengo.Network(label="Jet classification") as model:
    nengo_loihi.add_params(model)
    model.config[nengo.Connection].synapse = None

    neuron_type = nengo.LIF(tau_rc=0.02, tau_ref=0.001, amplitude=0.005)

    inp = nengo.Node(np.zeros(n_inputs), label="in")
    out = nengo.Node(size_in=5)

    layer_1 = nengo.Ensemble(n_neurons=64, dimensions=1, neuron_type=neuron_type, label="Layer 1")
    model.config[layer_1].on_chip = False
    nengo.Connection(inp, layer_1.neurons, transform=nengo_dl.dists.Glorot())

    layer_2 = nengo.Ensemble(n_neurons=32, dimensions=1, neuron_type=neuron_type, label="Layer 2")
    nengo.Connection(layer_1.neurons, layer_2.neurons, transform=nengo_dl.dists.Glorot())

    layer_3 = nengo.Ensemble(n_neurons=32, dimensions=1, neuron_type=neuron_type, label="Layer 3")
    nengo.Connection(layer_2.neurons, layer_3.neurons, transform=nengo_dl.dists.Glorot())

    nengo.Connection(layer_3.neurons, out, transform=nengo_dl.dists.Glorot())

    out_p = nengo.Probe(out)
    out_p_filt = nengo.Probe(out, synapse=nengo.Alpha(0.01))

I was wondering if it’s possible to use tf.nn.softmax_cross_entropy_with_logits_v2, just like in the convolutional example. The training would look as follows:

with nengo_dl.Simulator(model, minibatch_size=minibatch_size, seed=0) as sim:
    sim.train(
        train_data,
        tf.train.RMSPropOptimizer(learning_rate=0.001),
        objective={out_p: crossentropy},
        n_epochs=10,
    )
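Here crossentropy is defined following that example, roughly like this:

import tensorflow as tf

def crossentropy(outputs, targets):
    # softmax cross-entropy on the probed output, averaged over the batch
    return tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits_v2(logits=outputs, labels=targets)
    )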

I’m able to run the training and deploy the results to nengo_loihi, but I can’t get an error lower than 47%. I’m not sure if this is correct and would appreciate a comment on the idea.

Hi Bartek,

I don’t see any obvious errors in your code. Is 47% the training error when you train with nengo_dl, or the testing error when you test with nengo_loihi?

How well does your Keras model do on the task? How bad is 47% (i.e. is it a binary decision task, where 47% is about chance, or did it actually learn something)?

One thing you can try is breaking down the problem. Start with a model in nengo_dl that is as close as possible to your Keras model. Specifically, use nengo.RectifiedLinear neurons. You also might want to try making all your neurons the same at the start, by putting something like:

model.config[nengo.Ensemble].max_rates = nengo.dists.Choice([200])
model.config[nengo.Ensemble].intercepts = nengo.dists.Choice([0])

I’ve chosen a max_rate of 200 here because it is the inverse of your neuron amplitude of 0.005 (1 / 0.005 = 200), so the two combined give neurons that have an output of 1 when the input is 1 (i.e. they don’t affect the scale of the input).

If things do not train well in nengo_dl, then there’s some problem with how you set up your model that we’re missing (possibly some weird scaling going on somewhere). If it does train well, then you can try switching to LIFRate neurons (in nengo_dl) and see if that works, then try running with spiking LIF neurons in nengo_dl, and finally try it on Loihi.
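As a rough sketch, that progression amounts to swapping the neuron type between training runs (the parameters below are illustrative, reusing the values from your snippet):

# Stage 1: non-spiking ReLU, closest to the original Keras model
neuron_type = nengo.RectifiedLinear()
# Stage 2: rate-based LIF, still differentiable in nengo_dl
# neuron_type = nengo.LIFRate(tau_rc=0.02, tau_ref=0.001)
# Stage 3: spiking LIF in nengo_dl, and finally the same model in nengo_loihi
# neuron_type = nengo.LIF(tau_rc=0.02, tau_ref=0.001, amplitude=0.005)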

Hi Eric,
The 47% error was achieved with nengo_dl, but the simulator with nengo_loihi shows 84% error (not sure why yet). The task is to classify 5 types of particles for high energy physics, and the best results obtained with this Keras model were ~74% accuracy, so it actually learned something with nengo_dl.
Thank you for all the suggestions. I’ll investigate the problem and try playing with the different settings that you described. Good to know that the idea itself is correct; I probably just need to tune the model better.

Hello all,

I verified the code and everything seems correct. The model performs really well in the simulations with nengo_dl, but I still encounter the problem in nengo_loihi. I was able to get 57.43% accuracy with nengo_dl, and even 70.8% after adding the synaptic filters. This is already very close to the best results on this task with DNNs in Keras. I’ve added the second (time) dimension to the data as in the examples (https://www.nengo.ai/nengo-loihi/examples/mnist_convnet.html), but after a longer discussion with Eric we were not sure if this output actually comes from spiking neurons. Before the simulation I repeat the test data (inputs and targets) for a number of timesteps, as sketched below, so in my understanding the results should be computed with spikes. I would be very thankful for clarification on this part.
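Concretely, the repetition I do looks roughly like this (the array names and the number of timesteps here are mine, just for illustration):

n_steps = 30  # timesteps each example is presented for
# test_x has shape (n_examples, n_inputs) and test_y (n_examples, 5);
# add a time axis and tile it, so the simulator sees the same input
# at every timestep
test_x_time = np.tile(test_x[:, None, :], (1, n_steps, 1))
test_y_time = np.tile(test_y[:, None, :], (1, n_steps, 1))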

I’ve also added Probes to the layers, and surprisingly the number of spikes recorded with nengo_loihi is always 0. The same happens with both LIF and SpikingRectifiedLinear neurons. It seems like the simulation in nengo_loihi doesn’t work at all, while I get good results with nengo_dl (as I understand, also with spiking neurons). What could be the reason?

My full code is available here:
https://github.com/Borzyszkowski/SNN-CMS/blob/nengo_model/Jet_SNN_model.py

I would really appreciate a quick review or comment on this. Thanks for helping!

Best,
Bartek

One problem I see is that you’re not actually providing any input data when you run the model in nengo_loihi. In nengo_dl, when you call sim.train, you provide data as part of the train call, and that data overrides your input node’s output. The same is true of sim.loss. This is why nengo_dl examples often just set nodes to output zeros, since those values are then overridden.

When you’re running in nengo_loihi, the simulation uses the node as it was originally configured, i.e. outputting all zeros. This is why the nengo_loihi MNIST example uses the PresentInput class to set up the node so that each test image is presented for a length of presentation_time. Look at where PresentInput is used in that example to see how you should set up your node.
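A minimal sketch of that setup (the presentation time and data names here are placeholders for whatever fits your task):

import numpy as np
import nengo

presentation_time = 0.1  # seconds each test example is shown
test_x = np.random.rand(100, 16)  # stand-in for your real test inputs

with nengo.Network() as model:
    # instead of outputting zeros, the node now cycles through the test
    # examples, holding each one for presentation_time seconds
    inp = nengo.Node(
        nengo.processes.PresentInput(test_x, presentation_time), label="in"
    )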


Hi Eric,

Please forgive the late answer. Indeed, the problem was the input data configuration. I knew it had to be something really obvious, and eventually I was able to achieve up to 69.8% accuracy on Loihi. Hopefully the results will be published, and I can also share them here later. Thank you very much for helping.

Bartek
