Hi @nrofis,
Regarding your questions:
You are correct that we do not have any examples of reinforcement learning implemented in NengoDL. For reinforcement learning, we tend to use core Nengo to construct our models.
This is correct. NengoDL was designed to run TensorFlow (or Keras) models in the Nengo ecosystem, and the simulation environment of Nengo differs greatly from TensorFlow's. In Nengo, neural network simulations (note that simulation differs from training) advance in discrete timesteps, and the neuron currents themselves are simulated.
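To make the discrete-timestep picture concrete, here is a minimal, illustrative sketch in plain Python (this is not Nengo's actual implementation; the parameter values are placeholders) of stepping a single leaky integrate-and-fire (LIF) neuron's membrane voltage through time:

```python
def simulate_lif(current, n_steps, dt=0.001, tau_rc=0.02, v_th=1.0):
    """Advance a LIF membrane voltage one discrete timestep at a time,
    returning the spike times (in seconds)."""
    v = 0.0
    spikes = []
    for step in range(n_steps):
        # Euler update of the membrane equation dv/dt = (J - v) / tau_rc,
        # where J is the input current driving the neuron
        v += dt * (current - v) / tau_rc
        if v >= v_th:            # threshold crossing -> emit a spike
            spikes.append(step * dt)
            v = 0.0              # reset after spiking (refractory period omitted)
    return spikes

spike_times = simulate_lif(current=2.0, n_steps=1000)
print(len(spike_times))  # spike count over 1 second of simulated time
```

This is the sense in which "neuron currents are simulated": the state of every neuron is updated at each `dt`, which is quite different from a single feed-forward pass through a TensorFlow graph.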
As mentioned above, we don’t have any examples of reinforcement learning in NengoDL specifically. However, we do have some examples of reinforcement learning models constructed for Nengo itself.
- The Nengo docs on learning
- The NengoFPGA RL agent example
- This forum thread: Reinforcement Learning in Nengo - #16 by simondlevy
If you have a TensorFlow/Keras RL neural network that you’d want to simulate in NengoDL, it is possible to do so, but it will require some work.
To do RL in Nengo, you'll first need a Nengo-compatible neural network. One approach is to re-implement the TF/Keras model in native Nengo code. The NengoDL documentation provides a good side-by-side comparison of the two modelling APIs, which you can use as a starting point for rewriting your TF model as a Nengo model. Alternatively, you can use the NengoDL Converter to convert your TF model to a Nengo network.
Once you have a Nengo network, you'll then need to decide what kind of learning rule to use. Typically we use the PES learning rule, and it suits most of our needs. You can, however, also implement your own learning rule and attempt to replicate the functionality of TF's GradientTape that way (i.e., write a learning rule that computes gradients with respect to the inputs and uses them to construct an error signal).
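For intuition, here is what a PES-style update looks like, sketched in plain NumPy (this is illustrative only, not Nengo's internal implementation; in a real model you would simply pass `learning_rule_type=nengo.PES()` to a `nengo.Connection`). PES nudges a connection's decoders in the direction that reduces an externally supplied error signal:

```python
import numpy as np

def pes_update(decoders, activities, error, learning_rate=1e-4):
    """One PES-style step: adjust decoders against the error signal.
    The outer product distributes the error across all active neurons."""
    n_neurons = activities.shape[0]
    return decoders - (learning_rate / n_neurons) * np.outer(error, activities)

rng = np.random.default_rng(0)
decoders = np.zeros((1, 50))             # 1-D output, 50 presynaptic neurons
activities = rng.uniform(0, 200, 50)     # fixed firing rates for this demo
target = 0.5

for _ in range(1000):
    output = decoders @ activities
    error = output - target              # error = actual - desired
    decoders = pes_update(decoders, activities, error)

print(float(decoders @ activities))  # approaches the target of 0.5
```

In an RL setting, the interesting part is constructing that error signal from rewards (e.g., a TD error), since PES itself is agnostic about where the error comes from; the NengoFPGA RL agent example linked above is a good illustration of this pattern.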