Which ANN layers does nengo_loihi support?

I’m attempting to build a convolutional network with 1D convolutions, average pooling, and upsampling layers. When converting with nengo_dl and passing the result to the nengo_loihi simulator, I get the error: BuildError: nengo-loihi does not yet support 'Sparse' transforms on host to chip connections

How can I tell which layer is using sparse connections? Is there some sort of workaround I can use to continue using these layer types with Loihi?

I am also using remove_passthrough=False because a previous error indicated that I should. I didn’t find an explanation for this parameter in the documentation — can someone point me to the right place?

Hi @SumbaD, and welcome to the Nengo forums! :smiley:

Regarding your error, can you post some examples (either in a code snippet, or as an attachment) of your code that is throwing this error? It would be helpful to know which Nengo platform you are targeting (initially), and in which part of the build process the code is throwing that error.

As for the remove_passthrough parameter, some Nengo models use passthrough nodes (here’s an example of one such network) to aggregate the outputs of neural ensembles into larger dimensional vectors. In CPU and GPU computation, doing this aggregation can improve computation efficiency, as well as making the Nengo model code more readable to the user.

However, these passthrough nodes are not supported on the Loihi hardware itself (the Loihi hardware only performs neural computation, not computation in nodes), so if the Nengo network contains passthrough nodes, NengoLoihi implements them by making connections from the Loihi chip (which does the neural computation) to the host (which does the node computation) and back. While this technically achieves the functionality of passthrough nodes, it comes at a cost of computational efficiency, since the extra connections on and off the Loihi board introduce physical communication delays into the network.

Thus, the remove_passthrough parameter was added to the NengoLoihi simulator to tell the simulator to remove all of the passthrough nodes from the network before compiling it for the Loihi board. What remove_passthrough does is to “deaggregate” the aggregated ensemble outputs back into their component parts, and to make direct ensemble-to-ensemble connections instead.
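Mathematically, this deaggregation is just linear algebra: applying one weight matrix to a concatenated vector is the same as splitting the matrix column-wise and connecting each source directly to the target. Here is a minimal NumPy sketch of that equivalence (toy shapes, not the Nengo API):

```python
import numpy as np

rng = np.random.default_rng(0)

# Outputs of two upstream "ensembles" (toy data)
a = rng.standard_normal(3)
b = rng.standard_normal(4)

# Passthrough-node style: concatenate into one vector,
# then apply a single weight matrix downstream
W = rng.standard_normal((2, 7))
via_node = W @ np.concatenate([a, b])

# remove_passthrough style: split W into column blocks and connect
# each ensemble directly to the target, summing at the destination
direct = W[:, :3] @ a + W[:, 3:] @ b

assert np.allclose(via_node, direct)
```

Both routes compute the same output, but the second avoids routing the signal off-chip through a host node.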

Thank you for your prompt response, apologies for my own delay.

I can imagine that the upsampling operation is more easily done off the Loihi chip, do you think that is the layer that is causing the converter to use passthrough nodes?

Here are a few snippets of the code I’ve been running:

import numpy as np
import tensorflow as tf
import nengo
import nengo_dl
import nengo_loihi

# These arrays hold 16-bit integer audio data with
# 8 channels in the input and 1 channel in the output
mixes = np.array(mixes).reshape((-1, 256, 8))
targets = np.array(targets).reshape((-1, 256, 1))

# Keras model
dummyModel = tf.keras.Sequential()
activation = tf.keras.activations.relu
# Model layers
dummyModel.add(tf.keras.layers.Conv1D(8,4, input_shape=(256,8), padding="same", 
    activation=activation))
dummyModel.add(tf.keras.layers.AveragePooling1D(4, padding="valid"))
dummyModel.add(tf.keras.layers.Conv1D(8,4, padding="same", activation=activation))
dummyModel.add(tf.keras.layers.AveragePooling1D(4, padding="valid"))
dummyModel.add(tf.keras.layers.UpSampling1D(size=4))
dummyModel.add(tf.keras.layers.Conv1D(8,4, padding="same", activation=activation))
dummyModel.add(tf.keras.layers.UpSampling1D(size=4))
dummyModel.add(tf.keras.layers.Conv1D(1,8, padding="same", activation=activation))

dummyModel.compile("adam", "mse")
# the model was trained for 20 epochs just so that a comparison can be
# made with a model that is slightly better than random

# these imports were taken from a code example I found in the nengo forum
# which showed how to use Loihi energy probes
from nxsdk.graph.monitor.probes import PerformanceProbeCondition
from nxsdk.api.n2a import ProbeParameter
nengo_loihi.set_defaults()

# Convert to spiking network
nengo_converter = nengo_dl.Converter(
    dummyModel,
    swap_activations={tf.keras.activations.relu: nengo.SpikingRectifiedLinear()},
)
snnNet = nengo_converter.net

run_time = 30
dt = 0.001

sim = nengo_loihi.Simulator(snnNet, dt=dt)

# Set up energy probe
board = sim.sims["loihi"].nxsdk_board
probe_cond = PerformanceProbeCondition(
    tStart=1, tEnd=int(run_time / dt) * 10, bufferSize=1024 * 5, binSize=4)
e_probe = board.probe(ProbeParameter.ENERGY, probe_cond)

with sim:
    sim.run(run_time)
# BuildError: nengo-loihi does not yet support 'Sparse' transforms on host to chip connections

When I swap `sim` out for nengo’s default simulator (omitting the energy probe setup, of course) the example runs just fine, which I suspect is because Loihi doesn’t support one of the layer types I am using. I simplified the network to include only convolution layers and used a functional model rather than the sequential Keras model, but I still encountered the exact same error. This is the other model I tried:

#Functional model (as opposed to sequential)
inp = tf.keras.Input(shape=(256, 8, 1))

conv1 = tf.keras.layers.Conv2D(
    filters=16,
    kernel_size=(16,2),
    activation=tf.nn.relu,
)(inp)

conv2 = tf.keras.layers.Conv2D(
    filters=16,
    kernel_size=(8,4),
    activation=tf.nn.relu,
)(conv1)
# It doesn't matter that the output dimensions are different
# because I didn't attempt to train this model

simpleModel = tf.keras.Model(inputs=inp, outputs=conv2)
simpleModel.compile("adam", "mse")

Any insights as to what I’m missing would be appreciated, thank you.

Ah, the issue here is that TF uses sparse transforms to implement the biases for the convolution layers. These sparse transforms are currently not supported in NengoLoihi, so you’ll need to add use_bias=False when creating the convolution layers in TF, like so:

inp = tf.keras.Input(shape=(256, 8, 1))

conv1 = tf.keras.layers.Conv2D(
    filters=16,
    kernel_size=(16,2),
    activation=tf.nn.relu,
    use_bias=False,
)(inp)

conv2 = tf.keras.layers.Conv2D(
    filters=16,
    kernel_size=(8,4),
    activation=tf.nn.relu,
    use_bias=False,
)(conv1)
# It doesn't matter that the output dimensions are different
# because I didn't attempt to train this model

simpleModel = tf.keras.Model(inputs=inp, outputs=conv2)
simpleModel.compile("adam", "mse")

Disabling the biases for the convolution layers may reduce the representational power of your model, which in turn may affect its trainable accuracy. While not equivalent, you can mark the biases of the Nengo ensembles as trainable to try to recover some of that representational power. This can be done like so:

converter = nengo_dl.Converter(...)
with converter.net:
    # Expose the `trainable` config option in NengoDL first
    nengo_dl.configure_settings(trainable=None)

    # Set biases for all ensembles to trainable
    converter.net.config[nengo.Ensemble].trainable = True

    # Or set biases for a single ensemble (e.g., conv1) to trainable
    converter.net.config[converter.layers[conv1].ensemble].trainable = True

Applying this solution did remove that specific error, although now I’m seeing a new error on both models (with no other changes than use_bias=False):

BuildError: Conv2D transforms not supported for off-chip to on-chip connections where pre is not a Neurons object.

I assumed it was the nodes used in the upsampling layers, but this new error persists even with the simpleModel above. If it’s useful, when I print converter.net.all_objects for an equivalent fully convolutional model (without biases), I get the following Python list:

[<Node "input_1" at 0x7f435246deb8>,
 <Probe at 0x7f4352480fd0 of 'output' of <Neurons of <Ensemble "conv2d_3.0">>>,
 <Ensemble "conv2d.0" at 0x7f43524644a8>,
 <Ensemble "conv2d_1.0" at 0x7f4352480320>,
 <Ensemble "conv2d_2.0" at 0x7f4352480668>,
 <Ensemble "conv2d_3.0" at 0x7f43524809b0>,
 <Connection at 0x7f4352480160 from <Node "input_1"> to <Neurons of <Ensemble "conv2d.0">>>,
 <Connection at 0x7f4352480470 from <Neurons of <Ensemble "conv2d.0">> to <Neurons of <Ensemble "conv2d_1.0">>>,
 <Connection at 0x7f43524807b8 from <Neurons of <Ensemble "conv2d_1.0">> to <Neurons of <Ensemble "conv2d_2.0">>>,
 <Connection at 0x7f4352480b00 from <Neurons of <Ensemble "conv2d_2.0">> to <Neurons of <Ensemble "conv2d_3.0">>>]

Am I missing something? Is there a page somewhere that describes how nengo_loihi represents convolution layers internally? Is it similar to how nengo_dl represents convolutions (which I could also use an explanation of)?

Ah! Yes, I forgot to mention that in addition to making the use_bias change, to get your code to compile and run on the Loihi board, you’ll also have to ensure that the first convolution layer is computed off-chip.

The Keras-to-Loihi example we have in the NengoLoihi documentation provides a good guide on how to convert a convolution-based TF model into something that can run on the Loihi hardware using NengoLoihi (admittedly, we have neglected to stress the importance of use_bias, but that should be corrected shortly).

In particular, this section of the example details the steps you need to take to prepare (modify) your model to get data on and off the Loihi board whilst working within the confines of the Loihi ecosystem. For the off-chip to on-chip connections where pre is not a neurons object error in particular, you’ll need to create an additional convolution layer to convert the input signals into spikes that the subsequent convolution layers can process.

A TF convolution layer can be thought of as performing a linear matrix operation on the input, with the special-case property that it can be heavily optimized because one of the matrix operands (the filter) is just a small kernel repeated many times. Internally, Nengo, NengoDL, and NengoLoihi do just that: they convert a convolutional layer into the corresponding matrix multiplication needed to perform the convolution. This matrix multiplication is then implemented via the connection weights, similar to how the transform parameter works for standard nengo.Connection objects.
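To make that equivalence concrete, here is a small NumPy sketch showing that a 1D “valid” convolution (as a sliding dot product) matches a matrix multiplication whose weight matrix holds shifted copies of the kernel:

```python
import numpy as np

# A 1D "valid" convolution of a length-5 input with a length-3 kernel
# produces 3 outputs.  The same result can be obtained by building a
# (3 x 5) weight matrix whose rows are shifted copies of the kernel --
# a dense version of the connection weights the Nengo tools construct.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
k = np.array([0.5, -1.0, 2.0])

# Direct sliding-window computation
direct = np.array([np.dot(k, x[i:i + 3]) for i in range(3)])

# Equivalent matrix form: row i holds the kernel shifted by i columns
W = np.zeros((3, 5))
for i in range(3):
    W[i, i:i + 3] = k
matmul = W @ x

assert np.allclose(direct, matmul)  # both give [4.5, 6.0, 7.5]
```

In practice the weight matrix is mostly zeros, which is why convolution connections can be stored and executed far more compactly than a general dense transform.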

As a side note, the documentation for the various Nengo solutions can be found here, and we have examples on how to do various things, as well as a reference to the API for the various platforms.
Nengo: https://nengo.ai/nengo
NengoDL: https://nengo.ai/nengo-dl
NengoLoihi: https://nengo.ai/nengo-loihi