This is a question we received outside of the forum and I’ve taken the liberty of copying it here for posterity:
As the nengo-loihi example suite now includes an LMU example, I’d like to transition development to a cloud environment supporting this example. I see references to an “Intel cloud server” possessing the Loihi backend but could not find the documentation on gaining access to it. In any event, it appears Voelker is moving the LMU code to TensorFlow 2.0, which is a step in the right direction. This brings the LMU experiments closer to easy demonstration on high performance hardware via Chrome’s “Open in Colab” extension.
Unfortunately, we are unable to provide access to physical Loihi devices. For access to Loihi you will need to contact Intel directly and inquire about their Intel Neuromorphic Research Community (INRC).
That being said, if you are not specifically interested in a spiking implementation of the LMU, you should be able to run the non-spiking LMU code on any server. In addition, you should still be able to run the example in Colab using the Loihi emulator that is part of NengoLoihi.
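As a rough sketch of how the emulator can be used without hardware access, the snippet below builds a trivial Nengo network and runs it through NengoLoihi's software emulator by passing `target="sim"`. This is a minimal illustration, not the LMU example itself; the network here is a stand-in.

```python
import numpy as np
import nengo
import nengo_loihi

# Minimal network: a sine-wave input fed into an ensemble of spiking neurons.
with nengo.Network() as model:
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))
    ens = nengo.Ensemble(n_neurons=100, dimensions=1)
    nengo.Connection(stim, ens)
    probe = nengo.Probe(ens, synapse=0.01)

# target="sim" selects NengoLoihi's software emulator, so no Loihi
# hardware (and hence no INRC access) is needed.
with nengo_loihi.Simulator(model, target="sim") as sim:
    sim.run(1.0)

print(sim.data[probe].shape)  # probed output, one row per timestep
```

The same script runs unchanged on Loihi hardware by switching the `target` argument, which is the main appeal of prototyping against the emulator.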
The Colab platform is idiosyncratic. For instance, attempting to open the
lmu/experiments/capacity.ipynb notebook using the URL generated by the "Open in Colab" Chrome plugin fails due to a number of problems. Various attempts to work around these problems, such as following the approaches documented by Google Colab in "Loading Public Notebooks Directly from GitHub", also fail.
While it is theoretically possible to clone the repository into Google Drive (including
--recurse-submodules, as is necessary for the
lmu repo) and perform the necessary
pip install -e ., the resulting kludge puts one into configuration hell. I got to the point where I could run a modified version of the
capacity experiment by extracting the Python code from the notebook and running it directly as a
.py file. I had to do this because there is no apparent way for one notebook to load another notebook into its own execution environment. (But that may simply be my ignorance of how Colab works.)
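For reference, the steps described above amount to something like the following in a Colab cell. The repository URL is an assumption, and nbconvert is one way (among others) to extract a notebook's code cells into a plain .py script:

```shell
# Clone with submodules, as the lmu repo requires
git clone --recurse-submodules https://github.com/abr/lmu.git
cd lmu
# Editable install so the experiments can import the package
pip install -e .
# Extract the notebook's code cells into a runnable .py script
jupyter nbconvert --to script experiments/capacity.ipynb
python experiments/capacity.py
```

In a Colab notebook each shell command would be prefixed with `!` (and the `cd` done with the `%cd` magic so the working directory persists between cells).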
Without having used Colab myself, it's hard to offer much specific advice. One option, if it suits your needs, may be to simply run the training using NengoDL in Colab and then run those trained weights locally with NengoLoihi.
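A rough sketch of that workflow, assuming NengoDL's `freeze_params`/`save_params` API (the actual LMU training code is omitted, and the network here is a placeholder):

```python
import nengo
import nengo_dl
import nengo_loihi

with nengo.Network() as net:
    inp = nengo.Node([0.5])
    ens = nengo.Ensemble(n_neurons=100, dimensions=1)
    nengo.Connection(inp, ens)
    probe = nengo.Probe(ens)

# In Colab: train with NengoDL (sim.fit(...) would go here), then write
# the optimized parameters back into the network objects and/or to disk.
with nengo_dl.Simulator(net) as sim:
    # ... training via sim.fit(...) omitted ...
    sim.freeze_params(net)           # store trained weights in `net`
    sim.save_params("./lmu_params")  # or save to disk to move machines

# Locally: run the (now trained) network with NengoLoihi.
with nengo_loihi.Simulator(net) as loihi_sim:
    loihi_sim.run(0.1)
```

If the training and Loihi runs happen on different machines, the saved parameter file can be downloaded from Colab and reloaded locally with `load_params` in a NengoDL simulator before freezing.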
I’ll also mention that we are currently working to make the LMU code more portable which may help the Colab setup in the future. There will be a pip installable LMU release coming and the dependencies should be less complex.
Beyond that, we will also keep your comments in mind as we continue development.
The NengoDL LMU example runs in Colab by prepending these 3 lines:
!pip install tensorflow-gpu
!pip install nengo
!pip install nengo_dl