Example code of using LMU for time-series forecasting on Loihi

Hello everyone,
I have been trying to use spiking LMU for time-series forecasting on Loihi. I am using the example on the nengo website below.
https://www.nengo.ai/nengo-dl/examples/lmu.html
This example works very well for the sequential MNIST dataset on GPU but it is not running on Loihi. I was wondering if the example code could be extended to show how to run the GPU trained model on Loihi? I would like to train the spiking LMU on GPU and then implement it on Loihi for time-series forecasting.
Thank you for your help!

Hi @lsy105, and welcome to the Nengo forums! :smiley:

The NengoLoihi documentation does include an LMU example, and it can be found here.

There are two differences between the NengoLoihi and NengoDL LMU examples. First, the NengoLoihi example doesn’t use the LMU network with the sequential MNIST dataset, though it should be straightforward to adapt the NengoLoihi LMU network to work with the MNIST example. Second, the NengoLoihi example uses the PES learning rule to learn the output weights (the connection between ens and out) of the network. In contrast, the NengoDL LMU example uses TensorFlow to learn those weights.

While I haven’t tested it, the process of adapting the NengoDL example for NengoLoihi should be simple. First, start with the NengoLoihi network, but strip out the PES learning bits - basically, this chunk of code:

    # we'll use a Node to compute the error signal so that we can shut off
    # learning after a while (in order to assess the network's generalization)
    err_node = nengo.Node(lambda t, x: x if t < sim_t * 0.8 else 0, size_in=1)

    # the target signal is the ideally delayed version of the input signal,
    # which is subtracted from the ensemble's output in order to compute the
    # PES error
    nengo.Connection(stim, err_node, synapse=IdealDelay(delay), transform=-1)
    nengo.Connection(out, err_node, synapse=None)

    learn_conn = nengo.Connection(
        ens, out, function=lambda x: 0, learning_rule_type=nengo.PES(5e-4)
    )
    nengo.Connection(err_node, learn_conn.learning_rule, synapse=None)

    p_out = nengo.Probe(out)

Next, you’ll need to modify the network properties (input shape, layer size, etc.) to make it work with the MNIST dataset. You can pull these parameters from the NengoDL example.

Once this is done, this network should be trainable with NengoDL without any modification. After completing the NengoDL training, you’ll need to use the NengoDL simulator’s freeze_params function to store the trained parameters back into the original Nengo network. This frozen network can then be run with the NengoLoihi simulator to execute the model on Loihi.
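To make the workflow concrete, here is a rough sketch of the train-then-freeze-then-run sequence. This is untested pseudo-workflow rather than a drop-in script: it assumes you already have a `net` (the LMU network with the PES bits stripped out, as above), a probe `p_out`, and MNIST-shaped training data `train_images` / `train_labels`, and that nengo-dl and nengo-loihi are installed.

```python
import nengo
import nengo_dl
import nengo_loihi
import tensorflow as tf

# `net`, `p_out`, `train_images`, and `train_labels` are assumed to be
# defined as in the NengoDL LMU example (network built in a
# `with nengo.Network() as net:` block, probe on the output node).

# 1. Train the network with NengoDL on the GPU
with nengo_dl.Simulator(net, minibatch_size=100) as sim:
    sim.compile(
        optimizer=tf.optimizers.Adam(),
        loss=tf.losses.SparseCategoricalCrossentropy(from_logits=True),
    )
    sim.fit(train_images, train_labels, epochs=10)

    # 2. Store the trained parameters back into the Nengo objects,
    #    so the network can be built by other Nengo backends
    sim.freeze_params(net)

# 3. Run the frozen network on Loihi (or the NengoLoihi emulator)
with nengo_loihi.Simulator(net) as loihi_sim:
    loihi_sim.run(1.0)
    output = loihi_sim.data[p_out]
```

Note that the exact `compile`/`fit` arguments should follow whatever loss and data format the NengoDL LMU example uses; the key step for your question is `sim.freeze_params(net)`, after which the same `net` object can be handed to `nengo_loihi.Simulator`.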
