Saving the model after training in Nengo Core

The standard way to serialize a Nengo model is to use Python’s pickle library. You can use pickle.dump to save the sim object, and pickle.load to load it.
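Roughly, that looks like the sketch below. The toy network and file name are just placeholders, and whether pickling succeeds can depend on your Nengo version, the backend, and what objects your model contains, so treat it as a starting point rather than a guarantee:

```python
import pickle

import nengo

# Stand-in for your own network
with nengo.Network() as model:
    stim = nengo.Node(0.5)
    ens = nengo.Ensemble(n_neurons=100, dimensions=1)
    nengo.Connection(stim, ens)

# Building the simulator is where the decoder optimization happens
sim = nengo.Simulator(model)

# Save the built simulator to disk
with open("sim.pkl", "wb") as f:
    pickle.dump(sim, f)

# Later: load it back and run as usual
with open("sim.pkl", "rb") as f:
    sim = pickle.load(f)
sim.run(1.0)
```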

I’m curious how long the training in that example takes for you. The training (or, in our case, optimization) occurs when you create the nengo.Simulator. Does that take a long time for you? The time spent in sim.run is not doing any training, only inference, so none of that cost can be saved. I suspect that pickling/unpickling the simulator will not save much time compared to running the model normally.
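If you want to see where the time actually goes, a quick timing sketch like this (with your own network swapped in for the toy one) can help:

```python
import time

import nengo

with nengo.Network() as model:
    ens = nengo.Ensemble(n_neurons=1000, dimensions=1)
    nengo.Connection(ens, ens)  # recurrent connection, so decoders must be solved

t0 = time.time()
with nengo.Simulator(model) as sim:  # optimization (decoder solving) happens here
    t1 = time.time()
    sim.run(1.0)                     # pure inference; no training cost to save
    t2 = time.time()

print(f"build (optimization): {t1 - t0:.2f} s")
print(f"run (inference):      {t2 - t1:.2f} s")
```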

Also, you might be interested to know that the example you’re looking at is quite old at this point, and the way we currently recommend doing deep learning with Nengo is through the NengoDL package. You can see the spiking MNIST NengoDL example, which does something similar. NengoDL also has a nicer interface for saving and loading models, the save_params/load_params methods.
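With NengoDL, saving and restoring learned parameters looks roughly like this. It's a minimal sketch assuming NengoDL is installed; the network, the training step, and the "./trained_params" path are placeholders for your own:

```python
import nengo
import nengo_dl

# Stand-in for your own network
with nengo.Network() as net:
    inp = nengo.Node([0.5])
    ens = nengo.Ensemble(n_neurons=100, dimensions=1)
    nengo.Connection(inp, ens)
    probe = nengo.Probe(ens)

# Train, then save the learned parameters to disk
with nengo_dl.Simulator(net) as sim:
    # ... sim.compile(...) and sim.fit(...) would go here ...
    sim.save_params("./trained_params")

# In a later session, rebuild the same network and load the parameters back
with nengo_dl.Simulator(net) as sim:
    sim.load_params("./trained_params")
    sim.run(0.1)
```

Note that load_params expects the network structure to match the one the parameters were saved from, so keep the model-building code around rather than relying on the saved file alone.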

Hope that helps!