How promising are neuromorphic systems in the deep learning domain?

Hello Everyone!

I came across this interesting article and it prompted a few questions. Sorry if they are somewhat vague, but I was curious to hear more from the experts here.

1> Compared to current TPUs, NPUs, and VPUs, how would you rank the current state-of-the-art neuromorphic hardware in terms of deep learning performance and power consumption? From the article it seems that Google's Edge TPUs are at the forefront.

2> The deep learning success of neuromorphic hardware probably depends heavily on advances in spiking research. Assuming spiking networks reach a maturity similar to today's traditional deep nets, is there a possibility of neuromorphic hardware replacing TPUs, NPUs, VPUs, or other architectures in the future? I completely understand that any replacement would not be total, but what could be its extent?

3> If the answer to the above question is yes, how long do you think it will take for neuromorphic systems to penetrate the technology industry to the same level as (and beyond) today's GPUs/TPUs/FPGAs?

4> Edit: Are there domains other than deep learning where neuromorphic systems would be the de facto choice? I'm guessing cognitive modelling?

Thank you for your time!

I think the application makes a big difference. One big advantage of neuromorphic hardware is that it (typically) has the memory closer to the computation. This makes it faster and less expensive to access that memory (e.g. to look up synaptic weights) than on more traditional hardware (e.g. CPU/TPU/GPU), where the synaptic weights are stored in a big pool of DRAM farther from the computation.

What this means in practice is that traditional hardware (particularly GPUs and TPUs) gets a huge advantage when processing multiple input examples (using the same network weights) simultaneously, because it can load the weights once and then apply them to many examples, greatly reducing memory accesses. GPUs and TPUs in particular have been further optimized for this kind of parallel processing.
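To make that concrete, here's a toy NumPy sketch (purely illustrative, not how any particular accelerator works): in the one-at-a-time case the weights conceptually have to be brought in for every example, while in the batched case a single weight load is reused across the whole batch.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal((1024, 1024))   # one layer's synaptic weights
examples = rng.standard_normal((256, 1024))   # a batch of 256 input examples

# Online / one-at-a-time: the weights must be fetched for every example,
# so memory traffic scales with batch size times weight size.
outputs_online = np.stack([x @ weights for x in examples])

# Batched: the weights are loaded once and reused across the whole batch,
# so the cost of moving them is amortized over all 256 examples.
outputs_batched = examples @ weights

assert np.allclose(outputs_online, outputs_batched)
```

Neuromorphic hardware sidesteps some of that cost by keeping the weights next to the compute in the first place, which is why the one-at-a-time case hurts it much less.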

Online processing (where you're processing one input example at a time) is where neuromorphic hardware will have the greatest advantage. Where this will come into play most is when you want to do low-power processing of data in real time and in the field.

I know that doesn’t answer any of your questions, but I just wanted to point out that the answer to each will depend on the application. For example, I don’t see neuromorphics being heavily used for deep learning training any time soon, but I could see them being used for deep learning inference on edge devices in the near future.

Good to hear from you Eric, and thank you for taking a stab at my questions. Any explanation that helps clarify them is very much appreciated; I am aware that my questions are somewhat open-ended.

I agree that neuromorphic systems are the frontrunner when it comes to online learning systems (e.g. robotics, autonomous cars), and that this is fundamentally due to their hardware architecture. That said, do you (or others) have any numbers comparing the power consumption of neuromorphic hardware with Google's Edge TPUs (or any other device, e.g. GPUs/FPGAs)? I am less focused on the accuracy comparison with traditional deep nets, since from the nengo-dl examples it seems pretty evident that spiking networks achieve similar accuracies.
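(For context, this is roughly the kind of conversion I mean, along the lines of the nengo-dl Keras-to-SNN example; the model, sizes, and random inputs below are placeholders I made up for illustration, not something I've benchmarked.)

```python
import numpy as np
import nengo
import nengo_dl
import tensorflow as tf

# Toy Keras model standing in for one of the nengo-dl example networks.
inp = tf.keras.Input(shape=(784,))
hidden = tf.keras.layers.Dense(128, activation=tf.nn.relu)(inp)
out = tf.keras.layers.Dense(10)(hidden)
model = tf.keras.Model(inputs=inp, outputs=out)

# Convert the rate-based model to a spiking one by swapping activations.
converter = nengo_dl.Converter(
    model, swap_activations={tf.nn.relu: nengo.SpikingRectifiedLinear()}
)

# Run the spiking network for a number of timesteps and read out the
# prediction at the final timestep.
n_steps = 30
test_images = np.random.rand(20, 1, 784).astype(np.float32)
test_images = np.tile(test_images, (1, n_steps, 1))

with nengo_dl.Simulator(converter.net, minibatch_size=20) as sim:
    preds = sim.predict({converter.inputs[inp]: test_images})
    final_output = preds[converter.outputs[out]][:, -1]
```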

Also, do you see an edge for neuromorphic systems when it comes to implementing cognitive models (e.g. motor control, working memory) on them, compared to TPUs etc.? I would guess neuromorphic systems have a clear advantage here too (along with online learning), since their architecture is closer to that of the brain, which could enable us to develop better biologically plausible models.