Hello,
I’m relatively new to Nengo, NengoDL, and Nengo Loihi, so I apologize if these are simple questions or ones that were answered that I wasn’t able to find through the search.
I’m currently trying to run a converted Keras model on a Loihi chip, and I’m using the NxSDK probes to evaluate the results. The model is identical to the one in the Keras to Loihi SNN notebook, though I’m using a different dataset as input. I’m also training the model in Keras and just converting the result with NengoDL. I noticed in the execution time plots that a very large portion of the time was spent in the host phase.
When I read through the NxSDK documentation, it seemed like the creation of certain monitor probes affected these host times, so I found the places where they were created on the Nengo Loihi side and attempted to remove them. This did reduce the host times. However, removing the probes also affected the final accuracy of the output, causing it to increase for some reason. Here is a snippet of how I attempted to remove the monitor probes:
from collections import OrderedDict

loihi_sim = nengo_loihi.Simulator(net, dt=dt)

# Trying to remove probes from the Loihi model
loihi_sim.model.probes = []
loihi_sim.model.nengo_probes = []
loihi_sim.model.chip2host_params = {}
loihi_sim.model.chip2host_receivers = OrderedDict()

# Trying to remove monitor probes from the board
board = loihi_sim.sims["loihi"].nxsdk_board
board.monitor.probeConditionToProbeMap = {}
board.numProbes = 0
Is there a better way to remove all the monitor probes created by the board and model objects? Or would anything I’ve done in the code snippet affect the final results of the run?
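For what it’s worth, I’ve since wrapped the same attribute resets in a small helper so I can optionally keep a probe (e.g. the output probe I still want to read back). This is just a sketch over the same four attributes my snippet touches; I haven’t verified that these are the only places probes are referenced:

```python
from collections import OrderedDict

def strip_model_probes(model, keep=()):
    """Reset the probe-related attributes on a nengo_loihi model object,
    optionally keeping selected probes.

    Only touches the same four attributes as the snippet above; any other
    internal references to the probes would remain.
    """
    keep = set(keep)
    model.probes = [p for p in model.probes if p in keep]
    model.nengo_probes = [p for p in model.nengo_probes if p in keep]
    model.chip2host_params = {
        p: v for p, v in model.chip2host_params.items() if p in keep
    }
    model.chip2host_receivers = OrderedDict(
        (p, r) for p, r in model.chip2host_receivers.items() if p in keep
    )
```

Called as `strip_model_probes(loihi_sim.model)` it reproduces my snippet above; `strip_model_probes(loihi_sim.model, keep=[out_probe])` would keep a single output probe (`out_probe` being whatever probe object you still want read back).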
I also have a similar question about disabling learning. Since I’m using a pretrained Keras model converted through NengoDL, it didn’t seem to make sense for there to be a learning phase associated with the run. I forced the Nengo Loihi simulator to stop learning with the following code:
# Removing learning rules from the board
core = board.nxChips[0].nxCores
for i in core:
    core[i].stdpPreProfileCfg[0].configure(updateAlways=0)
    core[i].timeState[0].configure(tepoch=0)
    core[i].stdpCfg[0].configure(firstLearningIndex=0)
    core[i].numUpdates[0].configure(numStdp=0)
Is there a better way to stop learning on Nengo Loihi? This method reduced the learning time to 0, but for some reason it also increased the total run time and energy reported by the probes.
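In case it’s useful for discussion, I also refactored that loop into a helper that walks every chip rather than just `nxChips[0]`. This is a sketch using the same `configure()` calls as my loop above; I’m assuming `nxChips` is iterable and don’t know whether other registers would also need resetting:

```python
def disable_learning(board):
    """Disable STDP updates on every core of every chip on the board,
    using the same register writes as the loop above."""
    for chip in board.nxChips:
        for i in chip.nxCores:
            chip.nxCores[i].stdpPreProfileCfg[0].configure(updateAlways=0)
            chip.nxCores[i].timeState[0].configure(tepoch=0)
            chip.nxCores[i].stdpCfg[0].configure(firstLearningIndex=0)
            chip.nxCores[i].numUpdates[0].configure(numStdp=0)
```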
Apologies for the long post. Please let me know if I can clarify any of my points. Any help or guidance would be greatly appreciated.