Removing Probes and Adjusting Learning Rates on Nengo Loihi

Hello,

I’m relatively new to Nengo, NengoDL, and Nengo Loihi, so I apologize if these are simple questions or ones that have already been answered and I just couldn’t find them through search.

I’m currently trying to run a converted Keras model on a Loihi chip, and I’m using the NxSDK probes to evaluate the results. The model is identical to the one in the Keras to Loihi SNN notebook, though I’m using a different dataset as input, and I’m training the model in Keras and just converting the result with NengoDL. I noticed that in the execution time plots, a very large amount of the time was spent in the host phase.
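For context, my conversion step looks roughly like the following (a sketch modeled on the notebook; keras_model, the activation swap, and the scale_firing_rates/synapse values are placeholders rather than my exact settings):

import nengo
import nengo_dl
import tensorflow as tf

# Convert the trained Keras model to a Nengo network. The swap to spiking
# neurons and the scale_firing_rates/synapse values here are placeholders.
converter = nengo_dl.Converter(
    keras_model,
    swap_activations={tf.nn.relu: nengo.SpikingRectifiedLinear()},
    scale_firing_rates=100,
    synapse=0.005,
)
net = converter.net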

When I read through the NxSDK documentation, it seemed like the creation of certain monitor probes affected these host times, so I found where they were created on the Nengo Loihi side and attempted to remove them. I managed to reduce the host times with this method. However, removing the probes also affected the final accuracy of the output, causing it to increase for some reason. The following snippet shows how I attempted to remove the monitor probes:

from collections import OrderedDict

loihi_sim = nengo_loihi.Simulator(net, dt=dt)

# Trying to remove probes from the Loihi model
loihi_sim.model.probes = []
loihi_sim.model.nengo_probes = []
loihi_sim.model.chip2host_params = {}
loihi_sim.model.chip2host_receivers = OrderedDict()

board = loihi_sim.sims["loihi"].nxsdk_board

# Trying to remove probes from the board
board.monitor.probeConditionToProbeMap = {}
board.numProbes = 0

Is there a better way to remove all the monitor probes created by the board and model objects? Or would anything I’ve done in the code snippet affect the final results of the run?
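One alternative I’ve been considering (a sketch only, not something I’ve verified) is to strip the nengo.Probe objects out of the network before building, so the monitor probes never get created in the first place:

# Untested sketch: remove nengo.Probe objects before building, so the
# corresponding monitor probes are never created on the board.
for subnet in net.all_networks:
    subnet.probes.clear()
net.probes.clear()

loihi_sim = nengo_loihi.Simulator(net, dt=dt)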

I also have a similar question about stopping the learning phase. Since I’m using a pretrained Keras model converted through NengoDL, it didn’t seem to make sense for there to be a learning phase associated with the run. I tried to force the Nengo Loihi simulator to stop learning with the following code:

# Removing learning rules from the board
cores = board.nxChips[0].nxCores

for i in cores:
    # Disable STDP updates and zero out the learning epoch on each core
    cores[i].stdpPreProfileCfg[0].configure(updateAlways=0)
    cores[i].timeState[0].configure(tepoch=0)
    cores[i].stdpCfg[0].configure(firstLearningIndex=0)
    cores[i].numUpdates[0].configure(numStdp=0)

Is there a better way to stop learning on Nengo Loihi? This method reduced the learning time to 0, but for some reason it also increased the total run time and energy reported by the probes.

Apologies for the long post. Please let me know if I can clarify any of my points. Any help or guidance would be greatly appreciated.

The first thing I would try to figure out regarding the probes is what they’re being used for. NengoLoihi doesn’t probe everything in the model; it only probes objects that have a Probe object attached, or connections from an on-chip object to a host object (e.g. an Ensemble to a Node).

You can get all the Probe objects for your network with network.all_probes; looking at the .target attribute of each one should help clarify what’s being probed. It would also be worth looking at all_nodes, to make sure there aren’t more connections between the chip and the host than you expect. The Keras-to-Nengo converter may use Nodes at some points to facilitate connections; these would ideally be removed by NengoLoihi’s remove_passthrough simulator option, but it’s quite possible that some aren’t getting removed for one reason or another.
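As a quick sketch (using the loihi_sim handle from your snippet), something like the following should show what each probe targets and which Nodes exist; the commented line at the end just makes the remove_passthrough option explicit (it defaults to True, if I remember correctly):

net = loihi_sim.network

# Print what each probe is attached to
for probe in net.all_probes:
    print(probe, "->", probe.target)

# Print each Node with its input/output sizes
for node in net.all_nodes:
    print(node, "size_in:", node.size_in, "size_out:", node.size_out)

# remove_passthrough can be passed explicitly when creating the simulator:
# loihi_sim = nengo_loihi.Simulator(net, dt=dt, remove_passthrough=True)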

As for “stopping learning”, that’s not something we’ve ever played around with. If there are no learning connections (which there shouldn’t be in your model), then there should be no learning happening in the model, and any time spent in the learning phase should be negligible. I’ve never noticed time spent in the learning phase on non-learning models before. If you’re seeing this on your model, that is surprising; unfortunately, I don’t have any other suggestions as to how to avoid this, other than to dig more into why it’s happening.

Finally, I’ll just note that the Keras to Nengo converter is a NengoDL tool, which is designed to bring a range of Keras models into Nengo easily. It is not optimized for NengoLoihi, and for any particular model, there is likely a way to do it more efficiently for NengoLoihi by building the model directly using the Nengo API. So that’s the approach that I’d take if you really want to get the best speed on Loihi.
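As a rough illustration (a sketch only: the layer sizes are placeholders, and the random weights stand in for the trained Keras weights), a hand-built version might look like this:

import numpy as np
import nengo
import nengo_loihi

n_in, n_hidden, n_out = 784, 1000, 10  # placeholder layer sizes

# Placeholder weights; in practice these would be the trained Keras weights
w_in = np.random.uniform(-1e-3, 1e-3, size=(n_hidden, n_in))
w_out = np.random.uniform(-1e-3, 1e-3, size=(n_out, n_hidden))

with nengo.Network() as net:
    inp = nengo.Node(np.zeros(n_in))
    hidden = nengo.Ensemble(
        n_neurons=n_hidden,
        dimensions=1,
        neuron_type=nengo.SpikingRectifiedLinear(),
        gain=nengo.dists.Choice([1.0]),
        bias=nengo.dists.Choice([0.0]),
    )
    nengo.Connection(inp, hidden.neurons, transform=w_in, synapse=None)
    out = nengo.Node(size_in=n_out)
    nengo.Connection(hidden.neurons, out, transform=w_out, synapse=0.005)

with nengo_loihi.Simulator(net) as sim:
    sim.run(0.01)

Building it this way gives you direct control over the neuron types, synapses, and which objects end up on the chip.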

Thanks for your suggestions, Eric!

I tried looking at the probes and nodes in the network, but the only output I get is:

{'_initialized': True}

I get this output from either of the following commands:

loihi_sim.network.all_nodes[0].__dict__
loihi_sim.network.all_probes[0].__dict__

Am I looking at the correct objects, or is there a bigger issue that I’m not seeing?

For the learning, would learning-phase times of around 5 microseconds be expected? The learning times I’m seeing are around that level, so they seem negligible compared to the spiking times (around 60-70 microseconds). I was expecting times of 0 microseconds, but I may have had the wrong assumptions.

And thank you for the note on the converter - if it comes down to it, I’ll try using the Nengo API instead to create my model.