What do we get from symbolic processing?


I am interested in SPA.

I read the publications of NEF and How to build a Brain, but I have a question.
English is not my native language, so I may have misunderstood something.

The question is about the symbol approach and the connectionist approach.
The SPA takes the ICS standpoint: it implements symbolic representations as patterns of network activity, with unbinding carried out by vector transformations. From the Nengo examples, I get the impression that binding and unbinding can be implemented in a biological brain model.

However, I am also interested in deep learning. As you know, deep learning has been used to solve a wide range of problems. Against this background, I do not understand the advantages of implementing symbolic processing this way, or where it will lead in the future.

(1) Is the connectionist approach the same thing as deep learning?

(2) Once symbolic processing can be done in a rule-based way, what tasks should I solve to demonstrate brain-likeness or the power of Nengo?


Coming at this from the field of deep learning, it might help to use an excerpt from the words of LeCun, Bengio & Hinton (widely regarded as “the fathers of deep learning”) themselves:

Ultimately, major progress in artificial intelligence will come about through systems that combine representation learning with complex reasoning. Although deep learning and simple reasoning have been used for speech and handwriting recognition for a long time, new paradigms are needed to replace rule-based manipulation of symbolic expressions by operations on large vectors.

My view, which I hope is consistent with the quote above and with the authors of HtBaB (though they may wish to chime in), is that symbolic processing adds to deep learning by imposing structure on how the system manipulates higher-order constructs.

Symbol processing is powerful for tasks that require high-level coordination between many different moving parts that need to speak with each other in a common “language”. Traditionally, it is very difficult for deep learning and backpropagation to learn such systems on their own, starting from nothing (which is why LeCun et al. mention this at the very end of their Nature paper).
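To make the binding and unbinding you mention concrete: in the SPA these are done with circular convolution over high-dimensional vectors (Holographic Reduced Representations). A minimal NumPy sketch, with illustrative vector names of my own choosing, shows that unbinding with the approximate inverse recovers a noisy but recognizable copy of the original symbol:

```python
import numpy as np

def bind(a, b):
    # Circular convolution: the binding operation used in
    # Holographic Reduced Representations (and the SPA).
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(c, b):
    # Unbinding convolves with the approximate inverse of b
    # (its involution), i.e. circular correlation.
    b_inv = np.concatenate(([b[0]], b[:0:-1]))
    return bind(c, b_inv)

rng = np.random.default_rng(0)
d = 512  # dimensionality; higher d -> cleaner unbinding

# Unit-length random vectors act as "symbols".
role = rng.standard_normal(d)
role /= np.linalg.norm(role)
filler = rng.standard_normal(d)
filler /= np.linalg.norm(filler)

pair = bind(role, filler)          # e.g. ROLE (*) FILLER
filler_hat = unbind(pair, role)    # noisy reconstruction of filler

# Cosine similarity: high to the original filler, near zero to role.
sim = filler_hat @ filler / np.linalg.norm(filler_hat)
```

The reconstruction is only approximate, which is why SPA models typically follow unbinding with a clean-up memory that snaps the noisy vector back to the nearest known symbol.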

It might also help to know that, since the HtBaB book came out, Nengo has been integrated with TensorFlow to facilitate a hybrid combination of these two approaches (nengo-dl). By backpropagating through systems that process symbolic concepts, deep learning can be used to automatically discover new connections and new ways to leverage these higher-order systems for many other tasks.