Hmmm, I see that you have a bit of a predicament. I’ll try to elaborate on some of your options below:
NengoOCL doesn’t limit the number of Nengo models you can run on one GPU. However, in my experience, running multiple Nengo models on a single GPU doesn’t improve your throughput at all: because of the additional I/O needed, each simulation slows down in proportion to the number of models sharing the device (i.e., running 2 models on one GPU roughly halves the speed of both simulations, for a net gain of 0).
Thus, you really only have one option here, and that is to run your code on multiple GPUs. I’m not sure how to do that on Google Colab, but to get an improvement in your throughput, you’ll need to spawn multiple instances of your Jupyter notebook (or simulation processes) on separate GPUs.
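If you do go the multi-GPU route, here is a rough sketch of one way to structure it: launch one worker process per GPU and give each worker its own OpenCL context. This is an untested sketch, not a definitive recipe — `build_model` is a placeholder for however you construct your Nengo network, and the `nengo_ocl.Simulator(..., context=...)` call assumes you have `pyopencl` and `nengo_ocl` installed and that your GPUs appear under the first OpenCL platform:

```python
# Sketch: distribute repeated simulation runs across multiple GPUs by
# spawning one worker process per device. Assumes nengo, nengo_ocl, and
# pyopencl are installed; adjust platform/device indexing to your machine.
import multiprocessing as mp


def split_runs(n_runs, n_gpus):
    """Divide n_runs as evenly as possible across n_gpus workers."""
    base, extra = divmod(n_runs, n_gpus)
    return [base + (1 if i < extra else 0) for i in range(n_gpus)]


def run_on_gpu(gpu_index, n_runs):
    # Imports live inside the worker so each process builds its own context.
    import pyopencl as cl
    import nengo_ocl

    device = cl.get_platforms()[0].get_devices()[gpu_index]
    ctx = cl.Context([device])
    model = build_model()  # hypothetical: your own Nengo network builder
    for _ in range(n_runs):
        with nengo_ocl.Simulator(model, context=ctx) as sim:
            sim.run(0.35)  # one 350 ms presentation, as an example


def launch_all(total_runs=100, n_gpus=2):
    workers = [
        mp.Process(target=run_on_gpu, args=(i, n))
        for i, n in enumerate(split_runs(total_runs, n_gpus))
    ]
    for w in workers:
        w.start()
    for w in workers:
        w.join()

# launch_all()  # uncomment to start the workers
```

The key point is that each process owns exactly one device, so the simulations don’t contend for the same GPU’s I/O.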
@Eric is the primary author of NengoOCL, and he may have some suggestions on how to speed up your network with NengoOCL.
NengoSpinnaker is designed to run Nengo simulations in real time, so, doing some quick math, one simulation run with 70,000 images (at 350ms each) will take roughly 7 hours to complete. That’s slightly more manageable, but 100 runs will still take roughly a month to finish collecting all of the data. You can improve the throughput by running NengoSpinnaker simulations in parallel, but because one SpiNNaker board supports only one NengoSpinnaker model, you’ll need access to multiple SpiNNaker boards to accomplish this.
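For reference, the back-of-the-envelope arithmetic behind those estimates looks like this (plain Python, no Nengo required):

```python
# Real-time estimate for a NengoSpinnaker run: 70,000 images presented
# for 350 ms each, repeated for 100 runs.
IMAGES = 70_000
PRESENTATION_S = 0.350  # 350 ms per image

seconds_per_run = IMAGES * PRESENTATION_S            # 24,500 s
hours_per_run = seconds_per_run / 3600               # ~6.8 hours (roughly 7)
days_for_all_runs = 100 * seconds_per_run / 86_400   # ~28 days, i.e., about a month
```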
I suspect the reason your model is taking a while to finish running is the online learning rule incorporated into the model. We do have other tools (e.g., NengoDL) that can perform the learning in an offline manner and train the model much more quickly. I’m not sure what your goals are with your network, but it may be worthwhile to investigate NengoDL as well (see this MNIST example).