Nengo-dl simulator and Nengo simulator

I’m not well versed on the hardware side either, but I think there are a lot of other techniques being used in neuromorphic hardware in addition to reducing the need for floating point operations. Regardless, the reason we think the gradient concept is so important for neuromorphics is that it lets you train spiking neural networks more efficiently and effectively. In all of the ANN-to-SNN conversion techniques that I’m aware of, the goal is to minimize the performance difference between a network at the rate-based extreme of the gradient and a network at the spiking extreme of the gradient.

By breaking the problem down into steps – going from 32-bit neurons to 16-bit neurons, then 16-bit to 8-bit, and so on – we may be able to train spiking networks (i.e., 1-bit neurons) faster and more effectively. This is similar to a technique exploited in many other areas of machine learning: breaking a difficult problem down into easier-to-solve subproblems. Once the training procedure is done, the spiking network you get is no different from a spiking network obtained through any other training procedure, so it would be deployable on hardware like Loihi.
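To make the bit-width gradient concrete, here’s a minimal sketch (not NengoDL code; the `quantized_rate` function and the step-down schedule are hypothetical illustrations) of how reducing an activation’s bit width interpolates between a smooth rate-based nonlinearity and a binary, spike-like one:

```python
import numpy as np

def quantized_rate(x, bits):
    """Hypothetical illustration: quantize a rate activation to `bits` bits.

    At high bit widths this approximates a smooth rate-based nonlinearity;
    at 1 bit the output is binary, analogous to a spiking neuron.
    """
    rate = np.clip(x, 0.0, 1.0)     # rate-based activation in [0, 1]
    levels = 2 ** bits - 1          # number of representable output levels
    return np.round(rate * levels) / levels

x = np.linspace(0.0, 1.0, 5)
for bits in (32, 16, 8, 1):         # the step-down schedule described above
    print(bits, quantized_rate(x, bits))
```

In a training loop, the idea would be to fine-tune the network at each bit width before stepping down to the next, so each subproblem starts from a solution to an easier one; once you reach 1 bit, the result is an ordinary spiking network.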
