Is it possible to implement Gradient Descent in Nengo?

Is it possible to implement Gradient Descent in Nengo (not in NengoDL)? I know the steps to add a new learning rule.

I want to learn how to add new objects (in this case, a user-defined learning rule) to Nengo. I want to start with something simple, and then implement my own learning rule.

Since gradient descent is a well-known and easy-to-implement learning rule, I thought of adding a gradient descent learning rule (as a new object) to the Nengo framework.

Since the network being optimized needs to be differentiable, I was wondering whether the rule could later be tested in Nengo on a simple example with rectified linear or sigmoid neurons, as these are differentiable nonlinearities.
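To make the question concrete, here is the kind of update I have in mind, written as a standalone NumPy sketch rather than Nengo code (all names here are my own; this is just the math of gradient descent on a single sigmoid layer, not the Nengo learning-rule API):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    # Differentiable nonlinearity; derivative is sigmoid(x) * (1 - sigmoid(x))
    return 1.0 / (1.0 + np.exp(-x))

# Toy regression problem: recover W_true so that sigmoid(X @ W) matches Y.
X = rng.normal(size=(100, 3))
W_true = rng.normal(size=(3,))
Y = sigmoid(X @ W_true)

W = np.zeros(3)   # weights to be learned
lr = 0.5          # learning rate (hypothetical value, chosen for the toy problem)
losses = []
for _ in range(200):
    y_hat = sigmoid(X @ W)
    err = y_hat - Y
    losses.append(0.5 * np.mean(err ** 2))
    # Chain rule for mean squared error through the sigmoid:
    # dL/dW = mean over samples of err * y_hat * (1 - y_hat) * x
    grad = (err * y_hat * (1.0 - y_hat)) @ X / len(X)
    W -= lr * grad  # gradient descent step

print(f"loss before: {losses[0]:.4f}, after: {losses[-1]:.4f}")
```

The idea would be to express this same weight update as a Nengo learning rule object, so that the simulator applies it to connection weights each timestep.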

Thank you :slight_smile: