Hi all,
First of all, I would like to say a big thanks to all the Nengo developers for their great work!
I am using the Keras-to-Loihi example to make some tests on Loihi.
When reading the latency results after running the network, I found that the spikingTimePerTimeStep field of the energy probe object contains all zeros, whereas the other fields (host time, management time, …) seem OK.
Do you know why the spiking time is zero? Is this related to the use of PresentInput?
I created a Jupyter notebook that reproduces the issue. You can find it at the following link: https://drive.google.com/file/d/1XBVF52UCpVH2ShN6hKcLbGvdy2G4LLbY/view?usp=sharing
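To show concretely what I mean, here is a minimal sketch of the check I did. The field names follow the energy probe object, but the arrays below are made-up placeholder values, not real measurements, and zero_phases is just a hypothetical helper for illustration:

```python
import numpy as np

# Made-up per-timestep timing arrays standing in for the fields
# of the energy probe object (values are illustrative only).
spiking = np.array([0, 0, 0, 0])          # all zeros -- the symptom
management = np.array([12, 11, 13, 12])   # looks sane
host = np.array([5, 6, 5, 5])             # looks sane

def zero_phases(**phases):
    """Return the names of phases whose per-timestep times are all zero."""
    return [name for name, times in phases.items() if not np.any(times)]

print(zero_phases(spiking=spiking, management=management, host=host))
# -> ['spiking']
```

In my actual run, only the spiking field comes back like this; every other phase has plausible nonzero values.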
We’ve had issues before where time that one would think should be spent in the spiking phase actually ends up being spent in other phases (such as management, or sometimes even the learning phase, even though no learning is going on). We’re not sure why this is; you would have to ask on the INRC forum.
We have found that making some changes to the network can sometimes resolve this issue, for example removing all probes and changing the network so the inputs are entirely on-chip (for example, by using an Ensemble with fixed biases). Of course, these workarounds aren’t great; you typically want to measure at least something in your network, so getting rid of probes isn’t a long-term solution. We have also found that upgrading to a newer NxSDK version can help, though I am seeing a similar issue on a similar network with NxSDK 0.9.8.
Sorry I can’t be of more help, but overall, it’s also a mystery to us why time is spent in particular phases.
The plotExecutionTime function on your energy probe can help with visualizing how time is spent across all timesteps of your simulation.
The last thing I’ll mention is that if you’re creating your nengo_loihi.Simulator with precompute=False, then the SNIPs that it creates will happen in the management phase. So if possible, use precompute=True. In your case, though, you’re using the default, which should always use precompute=True if possible, and I think it should be possible for your network. You can always explicitly pass precompute=True to double-check, though.
Considering the output of plotExecutionTime and the values in the arrays that show the time spent in each phase (spikingTimePerTimeStep, managementTimePerTimeStep, …), it seems that the time belonging to the spiking phase is instead being spent in the learning phase, even though there is no learning: we are performing inference, so learning should be disabled.
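To make the pattern explicit, here is a small sketch with made-up numbers (the real values come from the probe arrays; these are placeholders for illustration only):

```python
import numpy as np

# Illustrative per-phase timing arrays: the "learning" entries soak
# up the time one would expect in the spiking phase, even though no
# learning rules are configured in the network.
phases = {
    "spiking": np.zeros(5),
    "learning": np.array([40.0, 41.0, 39.0, 40.0, 40.0]),
    "management": np.array([12.0, 11.0, 13.0, 12.0, 12.0]),
}

totals = {name: float(times.sum()) for name, times in phases.items()}
dominant = max(totals, key=totals.get)
print(dominant)  # -> learning
```

That is exactly the shape of what I see in my run: spiking is flat at zero while learning dominates every timestep.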
I think that this behavior may be related to the SNIPs created by the Nengo Loihi simulator.
If possible, could you please check this?
The only snip that should be executing in the learning phase is the learning snip, and if you don’t have any learning connections, it shouldn’t be doing much of anything. However, you could try turning it off anyway, just to see if it helps. The easiest way is probably to edit nengo_loihi/hardware/snips/nengo_learn.c.template so that the guard function returns 0 instead of 1. That should stop the snip from running at all.