Keras Model Converter and Neuron Type Not Supported in Nengo Loihi Simulator

Hi,

I used nengo_dl.Converter() to convert a Keras model that uses ReLU activations, passing swap_activations={tf.nn.relu: nengo.SpikingRectifiedLinear()} to swap all the ReLUs with SpikingRectifiedLinear. However, when I tried nengo_loihi.Simulator to simulate the converted model, I got the following error:

"The neuron type %r cannot be simulated on Loihi. Please either "
nengo.exceptions.BuildError: The neuron type %r cannot be simulated on Loihi. Please either switch to a supported neuron type like LIF or SpikingRectifiedLinear, or explicitly mark ensembles using this neuron type as off-chip with
net.config[ensembles].on_chip = False

Could you let me know how to solve this problem? Thank you very much.

Hi @Will_Cheung and welcome! It looks like there is some other activation that’s not supported, but we can’t tell which one from the part of the error message that you’ve posted. Could you please post a more complete output? That should say which neuron type cannot be simulated. Thanks!

Edit: Sorry, I just noticed that the string formatting is missing from this error message, so the offending neuron type is never actually shown. We will get back to you with some other suggestions as soon as we can.

One thing you could try is to inspect the converted model to see what neuron types it uses, e.g., via:

for ensemble in converter.net.ensembles:
    print(ensemble, ensemble.neuron_type)

assuming converter.net is the converted model. These print statements should indicate that one (or more) of the ensembles has a neuron_type other than SpikingRectifiedLinear. At that point you have three options:

1. Configure that ensemble to run off-chip (on your CPU, instead of on Loihi) by setting net.config[ensemble].on_chip = False (see the sketch after this list).
2. Swap that activation with one that is supported, by including it in the swap_activations dictionary as well.
3. Register a builder function with the Nengo-Loihi builder to build it into the hardware using available Loihi primitives (this is more involved, and may only apply if you are familiar with programming Loihi more directly).

I would recommend starting with the first approach, unless the activation can clearly be replaced.
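
A minimal sketch of the first option, assuming problem_ensemble is the offending ensemble found with the loop above:

import nengo_loihi

net = converter.net
nengo_loihi.add_params(net)  # makes the on_chip option configurable on this network
net.config[problem_ensemble].on_chip = False  # simulate this ensemble on the host CPU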

You may also find it helpful to refer to the nengo_dl.Converter documentation.

If the above doesn’t help, but you are comfortable with cloning and installing from GitHub, another debugging option is to clone the nengo-loihi repository, run pip install -e ., open nengo_loihi/builder/ensemble.py, and edit the BuildError that you’ve mentioned to include the actual neuron type, i.e., by changing it to:

    raise BuildError(
        "The neuron type %r cannot be simulated on Loihi. Please either "
        "switch to a supported neuron type like LIF or "
        "SpikingRectifiedLinear, or explicitly mark ensembles using this "
        "neuron type as off-chip with\n"
        "  net.config[ensembles].on_chip = False" % neurontype
    )

You could also print out some more debug information at the point where build_neurons is being called, to see which ensemble it is trying to build. We plan to have this error message fixed soon.
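
For example, a hypothetical one-line debug print (not part of the released code) could go in the same file:

# in nengo_loihi/builder/ensemble.py, inside build_ensemble, just before
# model.build(ens.neuron_type, ens.neurons, block) is called:
print("building neurons for", ens, "with neuron type", ens.neuron_type)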

Hi @arvoelke,

Thank you very much for the detailed explanation. After adding the code you attached above, it seems that nengo_dl.Converter does not replace the original ReLU activation functions with the spiking ReLU specified by the swap_activations argument. The following is the code segment for the conversion; model is a Keras model object defined with tf.nn.relu activation functions.

nengo_converter = nengo_dl.Converter(
    model,
    allow_fallback=False,
    swap_activations={tf.nn.relu: nengo.SpikingRectifiedLinear()},
)

assert nengo_converter.verify()

net = nengo_converter.net

for ensemble in net.ensembles:
    print(ensemble, ensemble.neuron_type)

And the output is:

<Ensemble "conv2d.0"> RectifiedLinear()
<Ensemble "conv2d_1.0"> RectifiedLinear()
<Ensemble "conv2d_2.0"> RectifiedLinear()
<Ensemble "conv2d_3.0"> RectifiedLinear()
<Ensemble "conv2d_4.0"> RectifiedLinear()
<Ensemble "conv2d_5.0"> RectifiedLinear()
<Ensemble "conv2d_6.0"> RectifiedLinear()
<Ensemble "conv2d_7.0"> RectifiedLinear()
<Ensemble "conv2d_8.0"> RectifiedLinear()
<Ensemble "conv2d_9.0"> RectifiedLinear()
<Ensemble "dense.0"> RectifiedLinear()

I was wondering if this is the reason why the Loihi simulator cannot run the model. Could you let me know how I can work around this issue? Thank you very much.

Will

Could you try using the converter with swap_activations={nengo.RectifiedLinear(): nengo.SpikingRectifiedLinear()} instead? It seems the network contains regular (non-spiking) ReLU neurons, which need to be converted to spiking neurons in order to run on Loihi.
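
In other words, mirroring your conversion call above:

nengo_converter = nengo_dl.Converter(
    model,
    allow_fallback=False,
    swap_activations={nengo.RectifiedLinear(): nengo.SpikingRectifiedLinear()},
)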

Hello @arvoelke,

Thank you for your suggestions. I tried it, and it seems all the ReLUs were replaced with nengo.SpikingRectifiedLinear correctly. However, the simulator raised the following error:

Traceback (most recent call last):
  File "/nengo-examples/cifar10_converter.py", line 191, in <module>
    loihi_simulation(net)
  File "/nengo-examples/cifar10_converter.py", line 173, in loihi_simulation
    net, dt=dt, precompute=False, hardware_options=hw_opts, remove_passthrough=False
  File "/usr/local/lib/python3.6/dist-packages/nengo_loihi/simulator.py", line 145, in __init__
    self.model.build(network)
  File "/usr/local/lib/python3.6/dist-packages/nengo_loihi/builder/builder.py", line 208, in build
    built = model.builder.build(model, obj, *args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/nengo/builder/builder.py", line 242, in build
    return cls.builders[obj_cls](model, obj, *args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/nengo/builder/network.py", line 94, in build_network
    model.build(conn)
  File "/usr/local/lib/python3.6/dist-packages/nengo_loihi/builder/builder.py", line 208, in build
    built = model.builder.build(model, obj, *args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/nengo/builder/builder.py", line 242, in build
    return cls.builders[obj_cls](model, obj, *args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/nengo_loihi/builder/connection.py", line 65, in build_connection
    build_host_to_chip(model, conn)
  File "/usr/local/lib/python3.6/dist-packages/nengo_loihi/builder/connection.py", line 185, in build_host_to_chip
    build_chip_connection(model, receive2post)
  File "/usr/local/lib/python3.6/dist-packages/nengo_loihi/builder/connection.py", line 644, in build_chip_connection
    assert post_slice == slice(None)
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()

Will

Hi @Will_Cheung,

That error is caused by the fact that, currently, Nengo Loihi does not support advanced indexing on connections from the Host to the Chip. If I had to guess, that feature is present in your converted network because you have convolutional layers (convolutional biases are implemented using advanced indexing by the Nengo DL Converter).

We do plan on adding support for advanced indexing in Nengo Loihi, but in the meantime you can work around this by setting use_bias=False on your Keras convolutional layers (and you could add a separate, non-convolutional, bias layer if you still want to have biases in the network).
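
For example, a minimal sketch of the workaround on a single layer (the filter count and variable names are placeholders):

# convolution without a bias term; the Converter then doesn't need
# advanced indexing on the resulting host-to-chip connection
y = tf.keras.layers.Conv2D(32, (3, 3), activation=tf.nn.relu, use_bias=False)(y)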

Also, if you have the chance could you post your Keras network definition? swap_activations={tf.nn.relu: nengo.SpikingRectifiedLinear()} should work, so I’m curious why it didn’t in your case.

Hi @drasmuss,

Thanks for the reply. I followed your suggestions and removed all the biases in the network. When I used the converter with swap_activations={nengo.RectifiedLinear(): nengo.SpikingRectifiedLinear()}, I got the following error:

Traceback (most recent call last):
  File "/cifar10_converter.py", line 191, in <module>
    loihi_simulation(net)
  File "/cifar10_converter.py", line 173, in loihi_simulation
    net, dt=dt, precompute=False, hardware_options=hw_opts, remove_passthrough=False
  File "/usr/local/lib/python3.6/dist-packages/nengo_loihi/simulator.py", line 145, in __init__
    self.model.build(network)
  File "/usr/local/lib/python3.6/dist-packages/nengo_loihi/builder/builder.py", line 208, in build
    built = model.builder.build(model, obj, *args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/nengo/builder/builder.py", line 242, in build
    return cls.builders[obj_cls](model, obj, *args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/nengo/builder/network.py", line 94, in build_network
    model.build(conn)
  File "/usr/local/lib/python3.6/dist-packages/nengo_loihi/builder/builder.py", line 208, in build
    built = model.builder.build(model, obj, *args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/nengo/builder/builder.py", line 242, in build
    return cls.builders[obj_cls](model, obj, *args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/nengo_loihi/builder/connection.py", line 65, in build_connection
    build_host_to_chip(model, conn)
  File "/usr/local/lib/python3.6/dist-packages/nengo_loihi/builder/connection.py", line 133, in build_host_to_chip
    "Conv2D transforms not supported for off-chip to "
nengo.exceptions.BuildError: Conv2D transforms not supported for off-chip to on-chip connections where `pre` is not a Neurons object.

When I used the converter with swap_activations={tf.nn.relu: nengo.SpikingRectifiedLinear()}, the error became:

Traceback (most recent call last):
  File "/cifar10_converter.py", line 191, in <module>
    loihi_simulation(net)
  File "/cifar10_converter.py", line 173, in loihi_simulation
    net, dt=dt, precompute=False, hardware_options=hw_opts, remove_passthrough=False
  File "/usr/local/lib/python3.6/dist-packages/nengo_loihi/simulator.py", line 145, in __init__
    self.model.build(network)
  File "/usr/local/lib/python3.6/dist-packages/nengo_loihi/builder/builder.py", line 208, in build
    built = model.builder.build(model, obj, *args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/nengo/builder/builder.py", line 242, in build
    return cls.builders[obj_cls](model, obj, *args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/nengo/builder/network.py", line 78, in build_network
    model.build(obj)
  File "/usr/local/lib/python3.6/dist-packages/nengo_loihi/builder/builder.py", line 208, in build
    built = model.builder.build(model, obj, *args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/nengo/builder/builder.py", line 242, in build
    return cls.builders[obj_cls](model, obj, *args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/nengo_loihi/builder/ensemble.py", line 98, in build_ensemble
    model.build(ens.neuron_type, ens.neurons, block)
  File "/usr/local/lib/python3.6/dist-packages/nengo_loihi/builder/builder.py", line 208, in build
    built = model.builder.build(model, obj, *args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/nengo/builder/builder.py", line 242, in build
    return cls.builders[obj_cls](model, obj, *args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/nengo_loihi/builder/ensemble.py", line 132, in build_neurons
    "The neuron type %r cannot be simulated on Loihi. Please either "
nengo.exceptions.BuildError: The neuron type %r cannot be simulated on Loihi. Please either switch to a supported neuron type like LIF or SpikingRectifiedLinear, or explicitly mark ensembles using this neuron type as off-chip with
  net.config[ensembles].on_chip = False

I attached my Keras network definition here:

def vgg13(original_shape, input_shape, output_shape):
    inp = tf.keras.Input(input_shape)
    y = tf.keras.layers.Reshape(original_shape)(inp)
    y = tf.keras.layers.Conv2D(64, (3, 3), padding='same', kernel_initializer='he_normal', activation=tf.nn.relu, use_bias=False)(y)
    y = tf.keras.layers.Conv2D(64, (3, 3), padding='same', kernel_initializer='he_normal', activation=tf.nn.relu, use_bias=False)(y)
    y = tf.keras.layers.AveragePooling2D(pool_size=(2, 2))(y)
    y = tf.keras.layers.Conv2D(128, (3, 3), padding='same', kernel_initializer='he_normal', activation=tf.nn.relu, use_bias=False)(y)
    y = tf.keras.layers.Conv2D(128, (3, 3), padding='same', kernel_initializer='he_normal', activation=tf.nn.relu, use_bias=False)(y)
    y = tf.keras.layers.AveragePooling2D(pool_size=(2, 2))(y)
    y = tf.keras.layers.Conv2D(256, (3, 3), padding='same', kernel_initializer='he_normal', activation=tf.nn.relu, use_bias=False)(y)
    y = tf.keras.layers.Conv2D(256, (3, 3), padding='same', kernel_initializer='he_normal', activation=tf.nn.relu, use_bias=False)(y)
    y = tf.keras.layers.Conv2D(256, (3, 3), padding='same', kernel_initializer='he_normal', activation=tf.nn.relu, use_bias=False)(y)
    y = tf.keras.layers.AveragePooling2D(pool_size=(2, 2))(y)
    y = tf.keras.layers.Conv2D(512, (3, 3), padding='same', kernel_initializer='he_normal', activation=tf.nn.relu, use_bias=False)(y)
    y = tf.keras.layers.Conv2D(512, (3, 3), padding='same', kernel_initializer='he_normal', activation=tf.nn.relu, use_bias=False)(y)
    y = tf.keras.layers.Conv2D(512, (3, 3), padding='same', kernel_initializer='he_normal', activation=tf.nn.relu, use_bias=False)(y)
    y = tf.keras.layers.AveragePooling2D(pool_size=(2, 2))(y)
    y = tf.keras.layers.Flatten()(y)
    # y = tf.keras.layers.Dense(4096, activation=tf.nn.relu)(y)
    # y = tf.keras.layers.Dense(4096, activation=tf.nn.relu)(y)
    y = tf.keras.layers.Dense(1000, kernel_initializer='he_normal', activation=tf.nn.relu, use_bias=False)(y)
    out = tf.keras.layers.Dense(output_shape, use_bias=False)(y)
    model = tf.keras.Model(inputs=inp, outputs=out)
    model.summary()
    model.compile(optimizer=tf.optimizers.Adam(),
                  loss=tf.losses.SparseCategoricalCrossentropy(from_logits=True),
                  metrics=["accuracy"])
    return model

Will

This is another limitation of the current Loihi implementation: host-to-chip convolutional connections must originate from a neuron object (not directly from a node). The easiest way to work around this is to explicitly mark the first layer of your network to be simulated off-chip, so that the host->chip connection from the first layer to the second originates from a neuron object rather than a node.

You can do that in your model by adding

nengo_loihi.add_params(converter.net)  # makes the on_chip option configurable
layer_ens = converter.layer_map[model.layers[2]][0][0]  # neurons of the first Conv2D layer
converter.net.config[layer_ens.ensemble].on_chip = False  # run that layer on the host

I wasn’t able to reproduce this error. Here is what I am running:

def vgg13(original_shape, input_shape, output_shape):
    inp = tf.keras.Input(input_shape)
    y = tf.keras.layers.Reshape(original_shape)(inp)
    y = tf.keras.layers.Conv2D(
        64,
        (3, 3),
        padding="same",
        kernel_initializer="he_normal",
        activation=tf.nn.relu,
        use_bias=False,
    )(y)
    y = tf.keras.layers.Conv2D(
        64,
        (3, 3),
        padding="same",
        kernel_initializer="he_normal",
        activation=tf.nn.relu,
        use_bias=False,
    )(y)
    y = tf.keras.layers.AveragePooling2D(pool_size=(2, 2))(y)
    y = tf.keras.layers.Conv2D(
        128,
        (3, 3),
        padding="same",
        kernel_initializer="he_normal",
        activation=tf.nn.relu,
        use_bias=False,
    )(y)
    y = tf.keras.layers.Conv2D(
        128,
        (3, 3),
        padding="same",
        kernel_initializer="he_normal",
        activation=tf.nn.relu,
        use_bias=False,
    )(y)
    y = tf.keras.layers.AveragePooling2D(pool_size=(2, 2))(y)
    y = tf.keras.layers.Conv2D(
        256,
        (3, 3),
        padding="same",
        kernel_initializer="he_normal",
        activation=tf.nn.relu,
        use_bias=False,
    )(y)
    y = tf.keras.layers.Conv2D(
        256,
        (3, 3),
        padding="same",
        kernel_initializer="he_normal",
        activation=tf.nn.relu,
        use_bias=False,
    )(y)
    y = tf.keras.layers.Conv2D(
        256,
        (3, 3),
        padding="same",
        kernel_initializer="he_normal",
        activation=tf.nn.relu,
        use_bias=False,
    )(y)
    y = tf.keras.layers.AveragePooling2D(pool_size=(2, 2))(y)
    y = tf.keras.layers.Conv2D(
        512,
        (3, 3),
        padding="same",
        kernel_initializer="he_normal",
        activation=tf.nn.relu,
        use_bias=False,
    )(y)
    y = tf.keras.layers.Conv2D(
        512,
        (3, 3),
        padding="same",
        kernel_initializer="he_normal",
        activation=tf.nn.relu,
        use_bias=False,
    )(y)
    y = tf.keras.layers.Conv2D(
        512,
        (3, 3),
        padding="same",
        kernel_initializer="he_normal",
        activation=tf.nn.relu,
        use_bias=False,
    )(y)
    y = tf.keras.layers.AveragePooling2D(pool_size=(2, 2))(y)
    y = tf.keras.layers.Flatten()(y)
    # y = tf.keras.layers.Dense(4096, activation=tf.nn.relu)(y)
    # y = tf.keras.layers.Dense(4096, activation=tf.nn.relu)(y)
    y = tf.keras.layers.Dense(
        1000, kernel_initializer="he_normal", activation=tf.nn.relu, use_bias=False
    )(y)
    out = tf.keras.layers.Dense(output_shape, use_bias=False)(y)
    model = tf.keras.Model(inputs=inp, outputs=out)
    model.summary()
    model.compile(
        optimizer=tf.optimizers.Adam(),
        loss=tf.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"],
    )
    return model


model = vgg13((64, 64, 3), (64, 64, 3), 10)

converter = nengo_dl.Converter(
    model, swap_activations={tf.nn.relu: nengo.SpikingRectifiedLinear()}
)
for ens in converter.net.all_ensembles:
    print(ens.neuron_type)

which prints out

SpikingRectifiedLinear()
SpikingRectifiedLinear()
SpikingRectifiedLinear()
SpikingRectifiedLinear()
SpikingRectifiedLinear()
SpikingRectifiedLinear()
SpikingRectifiedLinear()
SpikingRectifiedLinear()
SpikingRectifiedLinear()
SpikingRectifiedLinear()
SpikingRectifiedLinear()

indicating that all the activations have been swapped from tf.nn.relu to SpikingRectifiedLinear, as we’d expect. I’m not sure why that isn’t working for you. What TensorFlow version are you using? (I’m not sure how that could change anything, but don’t know what else it could be).

Hi @drasmuss,

Thanks for the reply. I am using TensorFlow 2.1. In the case of swap_activations={tf.nn.relu: nengo.SpikingRectifiedLinear()}, I used the following code to print the neuron types, as mentioned above:

nengo_converter = nengo_dl.Converter(
    model,
    allow_fallback=False,
    swap_activations={tf.nn.relu: nengo.SpikingRectifiedLinear()},
)
net = nengo_converter.net
for ensemble in net.ensembles:
    print(ensemble, ensemble.neuron_type)

and the following is the output:

<Ensemble "conv2d.0"> RectifiedLinear()
<Ensemble "conv2d_1.0"> RectifiedLinear()
<Ensemble "conv2d_2.0"> RectifiedLinear()
<Ensemble "conv2d_3.0"> RectifiedLinear()
<Ensemble "conv2d_4.0"> RectifiedLinear()
<Ensemble "conv2d_5.0"> RectifiedLinear()
<Ensemble "conv2d_6.0"> RectifiedLinear()
<Ensemble "conv2d_7.0"> RectifiedLinear()
<Ensemble "conv2d_8.0"> RectifiedLinear()
<Ensemble "conv2d_9.0"> RectifiedLinear()
<Ensemble "dense.0"> RectifiedLinear()

If I used swap_activations={nengo.RectifiedLinear(): nengo.SpikingRectifiedLinear()}, then the output became:

<Ensemble "conv2d.0"> SpikingRectifiedLinear()
<Ensemble "conv2d_1.0"> SpikingRectifiedLinear()
<Ensemble "conv2d_2.0"> SpikingRectifiedLinear()
<Ensemble "conv2d_3.0"> SpikingRectifiedLinear()
<Ensemble "conv2d_4.0"> SpikingRectifiedLinear()
<Ensemble "conv2d_5.0"> SpikingRectifiedLinear()
<Ensemble "conv2d_6.0"> SpikingRectifiedLinear()
<Ensemble "conv2d_7.0"> SpikingRectifiedLinear()
<Ensemble "conv2d_8.0"> SpikingRectifiedLinear()
<Ensemble "conv2d_9.0"> SpikingRectifiedLinear()
<Ensemble "dense.0"> SpikingRectifiedLinear()

which is the same as yours. I noticed that I printed the neuron_type of net.ensembles, while your code prints that of net.all_ensembles; are these two attributes different?

Will

I’m also using TF 2.1, so I’m really at a loss as to what difference could be making the swap not work in your environment. net.ensembles and net.all_ensembles are the same in this case (all_ensembles additionally includes ensembles defined inside subnetworks, and the converted network has none).
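
For illustration, here is a minimal hypothetical network where the two attributes do differ:

import nengo

with nengo.Network() as net:
    a = nengo.Ensemble(10, 1)
    with nengo.Network() as subnet:  # nested context adds subnet to net
        b = nengo.Ensemble(10, 1)

print(len(net.ensembles))      # 1: only ensembles defined directly in net
print(len(net.all_ensembles))  # 2: also counts ensembles inside subnetworks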

Anyway, swap_activations={nengo.RectifiedLinear(): nengo.SpikingRectifiedLinear()} is a perfectly fine solution; it’s just curious that the other method isn’t working for you.

Hello @drasmuss,

Below is the entire code, which I took from https://www.nengo.ai/nengo-dl/examples/tensorflow-models.html and modified. It includes both the nengo_dl simulation and the nengo_loihi simulation.

import tensorflow as tf
import os
import sys
import time
import pickle

import numpy as np
import nengo
import nengo_dl
import nengo_loihi

def classification_accuracy(y_true, y_pred):
    return 100 * tf.metrics.sparse_categorical_accuracy(
        y_true[:, -1], y_pred[:, -1])

def simple_cnn(original_shape, input_shape, output_shape, padding='valid'):
    inp = tf.keras.Input(input_shape)
    y = tf.keras.layers.Reshape(original_shape)(inp)
    y = tf.keras.layers.Conv2D(32, (3, 3), padding=padding, activation=tf.nn.relu, use_bias=False)(y)
    # y = nengo_dl.Layer(nengo.SpikingRectifiedLinear(amplitude=1))(y)
    # activation=tf.nn.relu,
    y = tf.keras.layers.Conv2D(32, (3, 3), padding=padding, activation=tf.nn.relu, use_bias=False)(y)
    y = tf.keras.layers.AveragePooling2D(pool_size=(2, 2))(y)
    y = tf.keras.layers.Conv2D(64, (3, 3), padding=padding, activation=tf.nn.relu, use_bias=False)(y)
    y = tf.keras.layers.Conv2D(64, (3, 3), padding=padding, activation=tf.nn.relu, use_bias=False)(y)
    y = tf.keras.layers.GlobalAveragePooling2D()(y)
    y = tf.keras.layers.Flatten()(y)
    y = tf.keras.layers.Dense(256, use_bias=False, activation=tf.nn.relu)(y)
    out = tf.keras.layers.Dense(output_shape, use_bias=False)(y)
    model = tf.keras.Model(inputs=inp, outputs=out)
    model.summary()
    model.compile(optimizer=tf.optimizers.Adam(),
                  loss=tf.losses.SparseCategoricalCrossentropy(from_logits=True),
                  metrics=["accuracy"])
    return model

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()

x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.

original_shape = x_train.shape[1:]
minibatch_size = 200

x_train = x_train.reshape((x_train.shape[0], -1))
x_test = x_test.reshape((x_test.shape[0], -1))

input_shape = x_train.shape[1:]
output_shape = 10

model = simple_cnn(original_shape, input_shape, output_shape, padding='valid')
# model.load_weights('weight.h5')

model.fit(x_train, y_train,
          # validation_split=0.2,
          verbose=1,
          batch_size=minibatch_size,
          epochs=100)
model.save('weight.h5')

print("Test accuracy:", model.evaluate(x_test, y_test, verbose=0)[1])

n_steps = 30
x_test = np.tile(x_test[:, None, :], (1, n_steps, 1))
y_test = np.tile(y_test[:, :, None], (1, n_steps, 1))

nengo_converter = nengo_dl.Converter(model, allow_fallback=False)
out_p = nengo_converter.net.probes[0]

with nengo_dl.Simulator(nengo_converter.net, minibatch_size=minibatch_size) as sim:
    sim.compile(loss={out_p: classification_accuracy})
    print("accuracy with ReLU: %.2f%%" %
          sim.evaluate(
              x_test, {out_p: y_test}, verbose=0)["loss"])

nengo_converter = nengo_dl.Converter(model, allow_fallback=False,
                                     swap_activations={nengo.RectifiedLinear(): nengo.SpikingRectifiedLinear()})
out_p = nengo_converter.net.probes[0]

for ensemble in nengo_converter.net.ensembles:
    print(ensemble, ensemble.neuron_type)

with nengo_dl.Simulator(nengo_converter.net, minibatch_size=minibatch_size) as sim:
    sim.compile(loss={out_p: classification_accuracy})
    print("accuracy with Spiking ReLU: %.2f%%" %
          sim.evaluate(
              x_test, {out_p: y_test}, verbose=0)["loss"])

dt = 0.001
presentation_time = 0.1
n_presentations = 50

nengo_loihi.add_params(nengo_converter.net)
layer_ens = nengo_converter.layer_map[model.layers[2]][0][0]
nengo_converter.net.config[layer_ens.ensemble].on_chip = False

hw_opts = dict(snip_max_spikes_per_step=120)
with nengo_loihi.Simulator(
        nengo_converter.net, dt=dt, precompute=False, hardware_options=hw_opts, remove_passthrough=False
) as sim:
    # run the simulation on Loihi
    sim.run(n_presentations * presentation_time)

    # check classification accuracy
    step = int(presentation_time / dt)
    output = sim.data[out_p][step - 1::step]

    correct = 100 * np.mean(
        np.argmax(output, axis=-1)
        == y_test[:n_presentations, -1, 0]
    )
    print("loihi accuracy: %.2f%%" % correct)

Here is the output:

Test accuracy: 0.7593
......
accuracy with ReLU: 75.93%
<Ensemble "conv2d.0"> SpikingRectifiedLinear()
<Ensemble "conv2d_1.0"> SpikingRectifiedLinear()
<Ensemble "conv2d_2.0"> SpikingRectifiedLinear()
<Ensemble "conv2d_3.0"> SpikingRectifiedLinear()
<Ensemble "dense.0"> SpikingRectifiedLinear()
......
Optimizing graph: creating signals finished in 0:00:00
Optimization finished in 0:00:00
accuracy with Spiking ReLU: 10.00%
Traceback (most recent call last):
  File "/loihi_error_reproduce.py", line 93, in <module>
    nengo_converter.net, dt=dt, precompute=False, hardware_options=hw_opts, remove_passthrough=False
  File "/usr/local/lib/python3.6/dist-packages/nengo_loihi/simulator.py", line 145, in __init__
    self.model.build(network)
  File "/usr/local/lib/python3.6/dist-packages/nengo_loihi/builder/builder.py", line 208, in build
    built = model.builder.build(model, obj, *args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/nengo/builder/builder.py", line 242, in build
    return cls.builders[obj_cls](model, obj, *args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/nengo/builder/network.py", line 94, in build_network
    model.build(conn)
  File "/usr/local/lib/python3.6/dist-packages/nengo_loihi/builder/builder.py", line 208, in build
    built = model.builder.build(model, obj, *args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/nengo/builder/builder.py", line 242, in build
    return cls.builders[obj_cls](model, obj, *args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/nengo_loihi/builder/connection.py", line 65, in build_connection
    build_host_to_chip(model, conn)
  File "/usr/local/lib/python3.6/dist-packages/nengo_loihi/builder/connection.py", line 133, in build_host_to_chip
    "Conv2D transforms not supported for off-chip to "
nengo.exceptions.BuildError: Conv2D transforms not supported for off-chip to on-chip connections where `pre` is not a Neurons object.

After swapping the activation functions from ReLU to spiking ReLU, the model’s test accuracy decreased from 0.759 to 0.1. I guess the model needs to be retrained after the swap, or ReLU should be replaced with spiking ReLU before training.

Also, I added the following lines of code

nengo_loihi.add_params(nengo_converter.net)
layer_ens = nengo_converter.layer_map[model.layers[2]][0][0]
nengo_converter.net.config[layer_ens.ensemble].on_chip = False

but the Loihi simulator still raised the error. I have removed all bias terms in the model and used valid padding. Could you advise me on where I should modify the code? Thank you very much.

Will

Ah, the on_chip=False fixed one of the invalid connections, but there are two more I didn’t think of, caused by the average pooling layers. Those get implemented as Nodes, since they don’t have a nonlinearity associated with them, and then the convolutional layers following those average pooling layers trigger that same Loihi limitation that all convolutional connections must originate from neuron objects.

The quickest way to resolve that would probably be to remove the average pooling layers, and replace them with (strided) convolution layers instead. That way everything will just stay on chip. Alternatively, you could add, for example, ReLU layers after the average pooling layers (so that there would be a nonlinearity, meaning that the following convolutional connections would originate from a neuron object and thereby map onto Loihi without problem).
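
For instance, a minimal sketch of the first suggestion (the filter count here is a placeholder): a block like

y = tf.keras.layers.Conv2D(32, (3, 3), activation=tf.nn.relu, use_bias=False)(y)
y = tf.keras.layers.AveragePooling2D(pool_size=(2, 2))(y)

could be replaced by a single strided convolution,

y = tf.keras.layers.Conv2D(32, (3, 3), strides=2, activation=tf.nn.relu, use_bias=False)(y)

which gives roughly the same spatial downsampling while keeping a nonlinearity (and therefore neurons) at that point in the network.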

There could be a lot of reasons for a drop in accuracy when switching to spiking neurons (there’s still a lot of art to doing spiking DL). But one quick issue I see in your code is that when evaluating the performance of a spiking model, you almost certainly want to have a synaptic filter on your output probe. Otherwise you’re just going to be looking at the raw spikes (1s or 0s) coming out of the last layer, which, unless the correct output neuron just happened to spike on the last timestep in particular, will not tell you which neurons are outputting the largest values. You need the synaptic filter to average over the spikes, so that you can tell which neurons are spiking the fastest. You can see an example of this in https://www.nengo.ai/nengo-dl/examples/spiking-mnist.html (see out_p_filt).
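
As a minimal sketch, assuming out_p is the output probe created by the Converter (as in your script):

out_p = nengo_converter.net.probes[0]
out_p.synapse = 0.01  # lowpass time constant (seconds); tune for your presentation length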

Hi @drasmuss,

Thank you for your suggestions. I modified the previous code by replacing the average pooling layers with strided convolutional layers, and also added a low-pass filter with tau=0.1. I am not sure if it is the right filter to add in my case. The test accuracies of the converted models were still 0.1. This time, the Loihi simulator raised the following error:

Traceback (most recent call last):
  File "/loihi_error_reproduce.py", line 112, in <module>
    nengo_converter.net, dt=dt, precompute=False, hardware_options=hw_opts, remove_passthrough=False
  File "/usr/local/lib/python3.6/dist-packages/nengo_loihi/simulator.py", line 208, in __init__
    self.sims["emulator"] = EmulatorInterface(self.model, seed=seed)
  File "/usr/local/lib/python3.6/dist-packages/nengo_loihi/emulator/interface.py", line 40, in __init__
    validate_model(model)
  File "/usr/local/lib/python3.6/dist-packages/nengo_loihi/validate.py", line 16, in validate_model
    validate_block(block)
  File "/usr/local/lib/python3.6/dist-packages/nengo_loihi/validate.py", line 21, in validate_block
    validate_compartment(block.compartment)
  File "/usr/local/lib/python3.6/dist-packages/nengo_loihi/validate.py", line 61, in validate_compartment
    % (comp.n_compartments, N_MAX_COMPARTMENTS)
nengo.exceptions.BuildError: Number of compartments (6272) exceeded max (1024)

Below is the updated code. For fast execution, I only trained the model for one epoch.

import tensorflow as tf
import os
import sys
import time
import pickle

import numpy as np
import nengo
import nengo_dl
import nengo_loihi

def classification_accuracy(y_true, y_pred):
    return 100 * tf.metrics.sparse_categorical_accuracy(
        y_true[:, -1], y_pred[:, -1])

def simple_cnn(original_shape, input_shape, output_shape, padding='valid'):
    inp = tf.keras.Input(input_shape)
    y = tf.keras.layers.Reshape(original_shape)(inp)
    y = tf.keras.layers.Conv2D(32, (3, 3), padding=padding, activation=tf.nn.relu, use_bias=False)(y)
    # y = nengo_dl.Layer(nengo.SpikingRectifiedLinear(amplitude=1))(y)
    # activation=tf.nn.relu,
    y = tf.keras.layers.Conv2D(32, (3, 3), padding=padding, strides=2, activation=tf.nn.relu, use_bias=False)(y)
    y = tf.keras.layers.Conv2D(64, (3, 3), padding=padding, activation=tf.nn.relu, use_bias=False)(y)
    y = tf.keras.layers.Conv2D(64, (3, 3), padding=padding, strides=2, activation=tf.nn.relu, use_bias=False)(y)

    y = tf.keras.layers.Conv2D(96, (3, 3), padding=padding, activation=tf.nn.relu, use_bias=False)(y)
    y = tf.keras.layers.Conv2D(96, (3, 3), padding=padding, strides=2, activation=tf.nn.relu, use_bias=False)(y)
    y = tf.keras.layers.Flatten()(y)
    y = tf.keras.layers.Dense(96, use_bias=False, activation=tf.nn.relu)(y)
    out = tf.keras.layers.Dense(output_shape, use_bias=False)(y)
    model = tf.keras.Model(inputs=inp, outputs=out)
    model.summary()
    model.compile(optimizer=tf.optimizers.Adam(),
                  loss=tf.losses.SparseCategoricalCrossentropy(from_logits=True),
                  metrics=["accuracy"])
    return model

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()

x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.

original_shape = x_train.shape[1:]
minibatch_size = 200

x_train = x_train.reshape((x_train.shape[0], -1))
x_test = x_test.reshape((x_test.shape[0], -1))

input_shape = x_train.shape[1:]
output_shape = 10

model = simple_cnn(original_shape, input_shape, output_shape, padding='valid')
# model.load_weights('weight.h5')

model.fit(x_train, y_train,
          # validation_split=0.2,
          verbose=1,
          batch_size=minibatch_size,
          epochs=1)
model.save('weight.h5')

print("Test accuracy:", model.evaluate(x_test, y_test, verbose=0)[1])

n_steps = 30
x_test = np.tile(x_test[:, None, :], (1, n_steps, 1))
y_test = np.tile(y_test[:, :, None], (1, n_steps, 1))

nengo_converter = nengo_dl.Converter(model, allow_fallback=False)
out_p = nengo_converter.net.probes[0]

with nengo_dl.Simulator(nengo_converter.net, minibatch_size=minibatch_size) as sim:
    sim.compile(loss={out_p: classification_accuracy})
    print("accuracy with ReLU: %.2f%%" %
          sim.evaluate(
              x_test, {out_p: y_test}, verbose=0)["loss"])

nengo_converter = nengo_dl.Converter(model, allow_fallback=False,
                                     swap_activations={nengo.RectifiedLinear(): nengo.SpikingRectifiedLinear()})
out_p = nengo_converter.net.probes[0]

for ensemble in nengo_converter.net.ensembles:
    print(ensemble, ensemble.neuron_type)

with nengo_dl.Simulator(nengo_converter.net, minibatch_size=minibatch_size) as sim:
    sim.compile(loss={out_p: classification_accuracy})
    print("accuracy with Spiking ReLU without synapse: %.2f%%" %
          sim.evaluate(
              x_test, {out_p: y_test}, verbose=0)["loss"])

out_p.synapse = 0.1
with nengo_dl.Simulator(nengo_converter.net, minibatch_size=minibatch_size) as sim:
    sim.compile(loss={out_p: classification_accuracy})
    print("accuracy with Spiking ReLU with synapse: %.2f%%" %
          sim.evaluate(
              x_test, {out_p: y_test}, verbose=0)["loss"])

dt = 0.001
presentation_time = 0.1
n_presentations = 50

nengo_loihi.add_params(nengo_converter.net)
layer_ens = nengo_converter.layer_map[model.layers[2]][0][0]
nengo_converter.net.config[layer_ens.ensemble].on_chip = False

hw_opts = dict(snip_max_spikes_per_step=120)
with nengo_loihi.Simulator(
        nengo_converter.net, dt=dt, precompute=False, hardware_options=hw_opts, remove_passthrough=False
) as sim:
    # run the simulation on Loihi
    sim.run(n_presentations * presentation_time)

    # check classification accuracy
    step = int(presentation_time / dt)
    output = sim.data[out_p][step - 1::step]

    correct = 100 * np.mean(
        np.argmax(output, axis=-1)
        == y_test[:n_presentations, -1, 0]
    )
    print("loihi accuracy: %.2f%%" % correct)

Will

tau=0.1 is probably a bit high (0.1 refers to a lowpass filter with a time constant of 100 timesteps, while we’re only presenting each input for 30 timesteps). You could also present each input for more timesteps, which lets you use a longer time constant. Another thing you’ll want to look at is the firing rates (i.e., how many spikes are those output neurons emitting?). If they’re only spiking a handful of times in that 30 timestep window, then there just isn’t a lot of data to go off of (no matter what kind of filtering you apply). If the firing rates are too low, you may be able to improve performance by scaling the gains and biases (the Ensemble.gain/bias parameters) on the neural layers. Another option is to train the model with some regularization terms that encourage higher firing rates.
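
As a rough sketch of those two knobs (firing-rate scaling and filtering), recent nengo-dl versions expose them directly as Converter arguments; this assumes your installed nengo-dl supports these arguments, and the particular values are just starting points to tune:

nengo_converter = nengo_dl.Converter(
    model,
    swap_activations={nengo.RectifiedLinear(): nengo.SpikingRectifiedLinear()},
    scale_firing_rates=100,  # scale gains up (and amplitudes down) to get more spikes
    synapse=0.005,           # apply a lowpass filter to all internal connections
)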

That error means that the model has layers with too many neurons to fit on one Loihi core. So the simple fix is to decrease the size of your model. Or you can try out a new feature that was just added to Nengo Loihi that will automatically split up large layers across cores (see https://github.com/nengo/nengo-loihi/pull/264). You will need to pull the latest code from github and do a developer installation to get that, as it hasn’t yet been released. Since it’s so new we haven’t made a proper example of how to use it yet, but if you run into issues @Eric can help you out.

Thanks for the advice. Since you mentioned that my model could be too large to fit on Loihi, I tried a smaller CNN architecture, and the error became the following:

Traceback (most recent call last):
  File "/loihi_error_reproduce.py", line 109, in <module>
    nengo_converter.net, dt=dt, precompute=False, hardware_options=hw_opts, remove_passthrough=False
  File "/usr/local/lib/python3.6/dist-packages/nengo_loihi/simulator.py", line 208, in __init__
    self.sims["emulator"] = EmulatorInterface(self.model, seed=seed)
  File "/usr/local/lib/python3.6/dist-packages/nengo_loihi/emulator/interface.py", line 40, in __init__
    validate_model(model)
  File "/usr/local/lib/python3.6/dist-packages/nengo_loihi/validate.py", line 16, in validate_model
    validate_block(block)
  File "/usr/local/lib/python3.6/dist-packages/nengo_loihi/validate.py", line 49, in validate_block
    validate_synapse(synapse)
  File "/usr/local/lib/python3.6/dist-packages/nengo_loihi/validate.py", line 82, in validate_synapse
    % (min_base, max_base)
AssertionError: compartment base must be >= -1 and < 256 (-1 indicating unused)

Also, I set the number of timesteps to 100 with a synapse value of 0.1; the accuracy with spiking ReLU was still 0.1, while the accuracy with regular ReLU was 0.58. Could you advise me on how to set the synapse value, or is there a rule of thumb for calculating it? Thanks.

Will

That error means that the model still isn’t fitting on Loihi (but in a different way). Updating to the most recent version of Nengo Loihi will let you use bigger models, or you can try to continue making your model smaller (also making sure that your number of filters is a multiple of 4 might help).

Unfortunately there isn’t a way to calculate the perfect synapse value, it depends on your model and your data. As is often the case in deep learning (e.g. with learning rates), you probably just need to play with different values to find one that works best. But as I mentioned above, there could be other issues besides the synapse. You may need to look at other factors, like the firing rates, to improve the accuracy of your model.

This error message, seen earlier in the thread, is telling you that the truth value of the entire array is ambiguous because the array has multiple elements.

To resolve this ambiguity, NumPy provides two methods that can be used to determine the truth value of an array with multiple elements: a.any() and a.all().

  • a.any() : returns True if at least one element in the array a is True.
  • a.all() : returns True only if all the elements in the array a are True.

So, instead of using the array itself as a condition, you should use one of these methods to determine its truth value.
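
For example:

import numpy as np

a = np.array([1, 2, 3])
b = np.array([1, 0, 3])
mask = a == b          # array([ True, False,  True])
# bool(mask), e.g. via `if mask:` or `assert mask`, raises the ValueError above
print(mask.any())      # True: at least one element is True
print(mask.all())      # False: not every element is True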