Writing a three-dense-layer model with Nengo-Loihi

Hello,

I need to rewrite a model from Keras to Nengo-Loihi and run it on the chip with the nengo_loihi.Simulator backend. The model is a simple three-dense-layer network, but it's not possible to import it directly from Keras into Loihi, and I couldn't find any examples of dense layers written in Nengo. Would it be possible to implement them? I would appreciate any suggestions; I attach a code snippet below.

from keras.layers import Input, Dense, Dropout
from keras.models import Model

dropoutRate = 0.25

inputArray = Input(shape=(input_shape,))  # input_shape: number of input features
x = Dense(64, activation='relu')(inputArray)
x = Dropout(dropoutRate)(x)
x = Dense(32, activation='relu')(x)
x = Dropout(dropoutRate)(x)
x = Dense(32, activation='relu')(x)
x = Dropout(dropoutRate)(x)

output = Dense(5, activation='softmax')(x)
model = Model(inputs=inputArray, outputs=output)

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

Best,
Bartek

The equivalent of a “dense” connection in Nengo is what we call a “direct connection” in our core documentation: https://www.nengo.ai/nengo/connections.html#direct-connections, or a “neuron-to-neuron connection” in our Loihi documentation: https://www.nengo.ai/nengo-loihi/examples/neuron_to_neuron.html. These are fully connected weight matrices from the neurons of one layer (ens1.neurons) to the neurons of another (ens2.neurons).
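As a minimal sketch (with arbitrary layer sizes and placeholder weights), such a connection looks like this:

import numpy as np
import nengo

with nengo.Network() as net:
    ens1 = nengo.Ensemble(n_neurons=64, dimensions=1)
    ens2 = nengo.Ensemble(n_neurons=32, dimensions=1)
    # a full (post.n_neurons, pre.n_neurons) weight matrix, analogous to
    # the kernel of a Keras Dense layer (transposed)
    weights = np.zeros((32, 64))  # replace with trained weights
    nengo.Connection(ens1.neurons, ens2.neurons, transform=weights)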

An example of this in the context of a deep learning Loihi model is available here: https://www.nengo.ai/nengo-loihi/examples/keyword_spotting.html. Note that the connections are of the form nengo.Connection(layer_x.neurons, layer_y.neurons, ...). This example works by taking the pretrained model parameters from NengoDL and directly mapping them onto the connection weights of a Nengo-Loihi model. I think you would want to do something similar: extract the weights from your trained Keras model and import them directly into an equivalent spiking model. Note that there is no softmax on Loihi, so you would either want to approximate it in another way or apply it as an offline post-processing step.
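As a rough sketch of the weight extraction (hypothetical layer indexing; note that a Keras Dense kernel is stored as (n_in, n_out) while a Nengo transform is (n_out, n_in), hence the transpose; the bias terms are not handled here and would need to be folded into the neuron bias currents or injected via a constant Node):

import numpy as np
import nengo

# hypothetical: kernel and bias of the first Dense layer in the trained
# Keras model (layers[0] is the Input layer; Dropout layers have no weights)
w1, b1 = keras_model.layers[1].get_weights()

with nengo.Network() as net:
    inp = nengo.Node(np.zeros(w1.shape[0]))
    layer_1 = nengo.Ensemble(n_neurons=w1.shape[1], dimensions=1)
    nengo.Connection(inp, layer_1.neurons, transform=w1.T)
    # ... repeat for the remaining Dense layers ...

The offline softmax would then just be probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True) applied to the probed output.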

For your reference there is another good example here: https://www.nengo.ai/nengo-loihi/examples/mnist_convnet.html, which performs the training directly on the model without going through this extra step of converting weights from one model format to another.

Hi Aaron,
Thank you for the answer. I wrote the following code:

import numpy as np
import nengo
import nengo_dl
import nengo_loihi

with nengo.Network(label="Jet classification") as model:
    nengo_loihi.add_params(model)
    model.config[nengo.Connection].synapse = None

    neuron_type = nengo.LIF(tau_rc=0.02, tau_ref=0.001, amplitude=0.005)

    inp = nengo.Node(np.zeros(n_inputs), label="in")  # n_inputs: number of input features
    out = nengo.Node(size_in=5)

    layer_1 = nengo.Ensemble(n_neurons=64, dimensions=1, neuron_type=neuron_type, label="Layer 1")
    model.config[layer_1].on_chip = False
    nengo.Connection(inp, layer_1.neurons, transform=nengo_dl.dists.Glorot())

    layer_2 = nengo.Ensemble(n_neurons=32, dimensions=1, neuron_type=neuron_type, label="Layer 2")
    nengo.Connection(layer_1.neurons, layer_2.neurons, transform=nengo_dl.dists.Glorot())

    layer_3 = nengo.Ensemble(n_neurons=32, dimensions=1, neuron_type=neuron_type, label="Layer 3")
    nengo.Connection(layer_2.neurons, layer_3.neurons, transform=nengo_dl.dists.Glorot())

    nengo.Connection(layer_3.neurons, out, transform=nengo_dl.dists.Glorot())

    out_p = nengo.Probe(out)
    out_p_filt = nengo.Probe(out, synapse=nengo.Alpha(0.01))

I was wondering if it’s possible to use tf.nn.softmax_cross_entropy_with_logits_v2 just like in the convolutional example. The training would look as follows:

with nengo_dl.Simulator(model, minibatch_size=minibatch_size, seed=0) as sim:
    sim.train(train_data, tf.train.RMSPropOptimizer(learning_rate=0.001),
              objective={out_p: crossentropy}, n_epochs=10)
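Here crossentropy would be the objective from that example, something like:

import tensorflow as tf

def crossentropy(outputs, targets):
    # softmax cross-entropy on the output logits, as in the nengo_dl
    # spiking MNIST example
    return tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits_v2(labels=targets, logits=outputs))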

I’m able to run the training and deploy the results to nengo-loihi, but I can’t get the error below 47%. I’m not sure whether this is correct and would appreciate a comment on the idea.

Hi Bartek,

I don’t see any obvious errors in your code. Is 47% the training error when you train with nengo_dl, or the testing error when you test with nengo_loihi?

How well does your Keras model do on the task? How bad is 47% (i.e. is it a binary decision task, where 47% is about chance, or did it actually learn something)?

One thing you can try is breaking down the problem. Start with a model in nengo_dl that is as close as possible to your Keras model. Specifically, use nengo.RectifiedLinear neurons. You also might want to try making all your neurons the same at the start, by putting something like:

model.config[nengo.Ensemble].max_rates = nengo.dists.Choice([200])
model.config[nengo.Ensemble].intercepts = nengo.dists.Choice([0])

I’ve chosen a max_rate of 200 here because it is the inverse of your neuron amplitude of 0.005, so combined they give neurons that output 1 when the input is 1 (i.e. they don’t affect the scale of the input).

If things do not train well in nengo_dl, then there’s some problem with how you’ve set up your model that we’re missing (possibly some weird scaling going on somewhere). If it does train well, then you can try switching to LIFRate neurons (still in nengo_dl) and see if that works, then try running with spiking LIF neurons in nengo_dl, and finally try it on Loihi.
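Concretely, the progression would just swap the neuron type in one place, e.g. (assuming a Nengo version where the rate neuron types also accept an amplitude argument):

# step 1: rate ReLU, closest to the Keras model
neuron_type = nengo.RectifiedLinear(amplitude=0.005)
# step 2: rate LIF
# neuron_type = nengo.LIFRate(tau_rc=0.02, tau_ref=0.001, amplitude=0.005)
# step 3: spiking LIF, still in nengo_dl; finally run under nengo_loihi.Simulator
# neuron_type = nengo.LIF(tau_rc=0.02, tau_ref=0.001, amplitude=0.005)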

Hi Eric,
The 47% error was achieved with nengo_dl, but the nengo_loihi simulator shows 84% error (I’m not sure why yet). The task is to classify 5 types of particles in high-energy physics, and the best result with this Keras model was ~74% accuracy, so the network did actually learn something with nengo_dl.
Thank you for all the suggestions. I’ll investigate the problem and try playing with the different settings you described. It’s good to know that the idea itself is correct; I can probably just tune the model better.