Non-differentiable network optimization in Nengo

According to the documentation, Nengo uses gradient descent on a differentiable (non-spiking) approximation of your neuromorphic network in order to make it learn.

Isn’t being biologically accurate (and therefore non-differentiable) one of the best parts of Nengo? Restricting learning to typical gradient descent seems (to me) to unnecessarily remove one of the key features of Nengo and spiking models.

Does Nengo offer alternative optimization methods, such as genetic algorithms or Hebbian learning?

Hi @gonen!

NengoDL is an extension to Nengo (what we call a “backend”) that uses TensorFlow under the hood to represent all operations, and thus allows gradient descent across the whole network.

You are correct that this is less biologically plausible. For biological plausibility, we recommend the features in core Nengo. We have a number of different tutorials on learning that use biologically-plausible local learning rules, including PES (which is error-based) and BCM and Oja (which are Hebbian). If you’re interested in running on a GPU, you can still use NengoDL to simulate these models, but you don’t have to use NengoDL’s support for gradient descent.
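To make the PES rule concrete, here is a small standalone sketch of its decoder update, Δd_i = −(κ/n)·e·a_i, where e is the decoded error and a_i the presynaptic activities. This is an illustration of the formula only, not Nengo's internal implementation (in a real model you would attach `nengo.PES()` to a `nengo.Connection` and let the simulator apply the update):

```python
# Standalone sketch of the PES decoder update: delta d_i = -(kappa / n) * e * a_i.
# Names and values here are illustrative, not part of the Nengo API.

def pes_update(decoders, activities, error, learning_rate=1e-4):
    """Return updated decoders after one PES step."""
    n = len(activities)
    return [
        d - learning_rate / n * error * a
        for d, a in zip(decoders, activities)
    ]

decoders = [0.0, 0.0, 0.0]
activities = [10.0, 5.0, 0.0]   # firing rates of three presynaptic neurons
error = 1.0                     # decoded value minus target
new_decoders = pes_update(decoders, activities, error, learning_rate=0.1)
```

Note that the update for each neuron is local: it depends only on that neuron's activity and the shared error signal, which is what makes the rule biologically plausible. A silent neuron (activity 0) has its decoder left unchanged.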

As for other methods, some people have looked at using genetic algorithms with Nengo, though we do not have built-in support for this. And if you’re interested in deep networks specifically, I’ve investigated Feedback Alignment in Nengo in chapter 6 of my thesis; it worked well for MNIST, but I found (as others have also found) that it doesn’t scale well to more challenging datasets. (Code for MNIST using Feedback Alignment in Nengo is here, but be warned that it’s a bit old now, so it might not work with newer versions of Nengo out of the box.)
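For readers unfamiliar with Feedback Alignment: the core idea is that the backward pass uses a fixed random matrix in place of the transposed forward weights, so no weight transport is needed. The following toy NumPy sketch illustrates that idea on a two-layer linear network with a single fixed input; all sizes, rates, and the linear (non-spiking) setup are illustrative simplifications, not the thesis code:

```python
import numpy as np

# Toy sketch of feedback alignment: the first layer's update uses a fixed
# random matrix B instead of the transposed forward weights W2.T.
# Everything here (sizes, learning rate, linear units) is illustrative.
rng = np.random.default_rng(0)

W1 = 0.1 * rng.normal(size=(4, 3))   # first-layer weights (learned)
W2 = 0.1 * rng.normal(size=(2, 4))   # second-layer weights (learned)
B = 0.1 * rng.normal(size=(4, 2))    # fixed random feedback (never learned)

x = rng.normal(size=3)               # a single fixed input
target = rng.normal(size=2)
lr = 0.05

losses = []
for _ in range(500):
    h = W1 @ x                       # forward pass (linear, for simplicity)
    y = W2 @ h
    e = y - target
    losses.append(float(e @ e))
    W2 -= lr * np.outer(e, h)        # exact gradient step for W2
    W1 -= lr * np.outer(B @ e, x)    # feedback alignment: B replaces W2.T
```

Over training, W2 tends to align with B's transpose, which is why the random feedback still delivers useful error information to the earlier layer.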


I see. I was under the impression that aside from using TF, NengoDL was otherwise identical to Nengo core. And thank you for linking your thesis, it looks like a great resource for some other questions I was about to ask.

I noticed that those rules (PES, BCM and Oja) only modify connection weights. Does Nengo provide (or plan to provide) any sort of hyperparameter optimization?

If you use only the “standard” Nengo simulator commands (i.e. those that are also available on nengo.Simulator), then NengoDL should behave the same as Nengo core. However, if you use the NengoDL-specific functions, then you’re doing something that’s not possible in Nengo core, namely optimizing over the whole model using backpropagation.

We currently don’t have any plans for built-in hyperparameter optimization. We’ll typically use hyperopt or nni to generate networks with various hyperparameters and optimize over them.
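The general pattern with any of these tools is the same: wrap “build a network with these hyperparameters, simulate it, and return a loss” in an objective function, then let the optimizer sample the search space. Here is a minimal pure-Python random-search sketch of that pattern; the objective, parameter names, and search ranges are all hypothetical stand-ins (a real setup would construct and run a Nengo model inside `objective`, and would typically use hyperopt or nni rather than this hand-rolled loop):

```python
import random

def objective(params):
    # Hypothetical stand-in for: build a Nengo network with these
    # hyperparameters, simulate it, and return a loss to minimize.
    # Here it is just a toy quadratic with a minimum near n_neurons=200.
    return (params["n_neurons"] - 200) ** 2 + params["tau"] * 10

# Hypothetical search space: samplers for each hyperparameter.
search_space = {
    "n_neurons": lambda: random.randint(50, 500),
    "tau": lambda: random.uniform(0.001, 0.1),
}

random.seed(0)
best_params, best_loss = None, float("inf")
for _ in range(50):
    params = {name: sample() for name, sample in search_space.items()}
    loss = objective(params)
    if loss < best_loss:
        best_params, best_loss = params, loss
```

Libraries like hyperopt replace the random sampling with smarter strategies (e.g. tree-structured Parzen estimators), but the objective-function interface is essentially the same.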

Great. Any resources/examples you could point to on using third-party hyperparameter optimization tools with Nengo?