Hierarchical Temporal Memory vs. NEF/SPA

Hi @michaelklachko, doing a quick browse from their most recent white paper you linked above, some examples of worrying non-plausible simplification:

  1. All of the inputs are binary vectors. The same goes for connections: if the ‘permanence’ value of a synapse is above a threshold, the connection is assigned a value of 1, otherwise 0. Since actual system inputs and neural connections are definitely not binary, this makes HTM hard to model in a biologically plausible neural network. (A toy sketch of this binarization, together with the column selection from point 2, follows the list.)

  2. In the second step of their spatial pooling (Page 35, Phase 2) they find the k most active columns and apply learning only to these columns. Dynamically, setting up winner-take-all (WTA) with lateral inhibitory connections is notoriously tricky, and isn’t something that can be done in a single time step. On top of that, controlling the learning so that it’s only applied after the network has settled on a set of winners is a whole other issue. It might be the case that it works with learning running the whole time as the WTA circuit settles, but the dynamics are complex and can’t just be assumed to work.
    (Side note: robust WTA can be implemented using subcortical circuits, e.g. the Nengo Basal Ganglia model, which is an implementation of a circuit detailed in the work of Gurney, Prescott, and Redgrave; a minimal usage sketch follows the list.)

  3. Another potential issue is their distinction between activation due to lateral connections and activation due to feedforward connections. They put a neuron in a ‘predictive state’ if it receives activity over a lateral connection, and an ‘active state’ if it receives activity over feedforward connections (a toy sketch of this bookkeeping also follows the list). I’m not sure what this translates to in an actual neural model; perhaps lateral connections modulate the STDP from the feedforward connections. But it’s another thing that gets glossed over by not ‘getting bogged down in details’ and would need to be sorted out in a more biologically constrained model.
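
For point 1 and the first half of point 2, here is a rough Python/NumPy sketch of the spatial-pooling core as I read it from the white paper (the sizes, the 0.5 permanence cutoff, and all variable names are my own toy choices, not Numenta’s):

```python
import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_columns, k_active = 64, 32, 4  # toy sizes
perm_threshold = 0.5                       # permanence cutoff for a "connected" synapse

# Each column has a real-valued permanence to every input bit.
permanences = rng.uniform(0.0, 1.0, size=(n_columns, n_inputs))

# Binary input vector, as described in the paper.
x = (rng.uniform(size=n_inputs) < 0.1).astype(int)

# Permanences above threshold become binary (connected) synapses.
connected = (permanences >= perm_threshold).astype(int)

# Overlap = number of connected synapses that see an active input bit.
overlap = connected @ x

# k-winners-take-all: only the k columns with the largest overlap become active,
# and learning is then applied only to those columns.
winners = np.argsort(overlap)[-k_active:]
active_columns = np.zeros(n_columns, dtype=int)
active_columns[winners] = 1
```

Note that the `np.argsort` line is exactly the kind of instantaneous, global selection that is easy in software but hard to get out of a recurrent WTA circuit in a single time step.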
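
For the WTA side note, here is a minimal Nengo sketch of using the Basal Ganglia network to select among a handful of competing inputs (the utility values and probe filter are arbitrary example choices):

```python
import nengo

d = 4  # number of competing "columns"/actions

with nengo.Network() as model:
    # Constant utilities for each competitor; the largest should be selected.
    stim = nengo.Node([0.6, 0.3, 0.8, 0.5])
    bg = nengo.networks.BasalGanglia(dimensions=d)
    nengo.Connection(stim, bg.input)
    probe = nengo.Probe(bg.output, synapse=0.03)

with nengo.Simulator(model) as sim:
    sim.run(0.5)

# The basal ganglia output is inhibitory: the selected dimension ends up
# closest to zero while the non-selected dimensions are pushed negative.
print(sim.data[probe][-1])
```

Even here, the selection takes tens of milliseconds of simulated time to settle, which is the point about not being able to assume a one-step k-WTA.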
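
And for point 3, a toy sketch of the two-state bookkeeping as I read it (the threshold and the column ‘bursting’ rule are my paraphrase of the paper, not a proposed neural mechanism):

```python
import numpy as np

def update_states(ff_active_columns, lateral_input, predictive_prev, theta=2):
    # lateral_input, predictive_prev: (n_columns, n_cells_per_column) arrays.
    # ff_active_columns: boolean vector of columns receiving feedforward input.

    # Lateral (distal) input only puts a cell into the *predictive* state.
    predictive = lateral_input >= theta

    # Feedforward input makes cells in a column *active*: previously predicted
    # cells fire preferentially; if none were predicted, the whole column fires.
    active = np.zeros_like(predictive_prev, dtype=bool)
    for c in np.flatnonzero(ff_active_columns):
        if predictive_prev[c].any():
            active[c] = predictive_prev[c]
        else:
            active[c] = True
    return active, predictive
```

The open question is what would implement this if/else bookkeeping in actual neurons.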

So, that’s a few of the issues, but probably the best way to get familiar with them is to do your own HTM implementation. :slight_smile:

I started reading the course notes, but immediately in the introduction there’s a “Problems with current approaches” section, which describes large-scale neural models (Blue Brain, etc.) and cognitive models (ACT-R, etc.), yet completely ignores the elephant in the room: HTM. I also looked through the “How to Build a Brain” book, and couldn’t find any mention of Numenta’s work there either. Both Spaun and HTM fall in between detailed brain simulators and cognitive models. In fact, it seems that no one else is in this category!

I recommend continuing reading! As for not being mentioned in How to Build a Brain, I don’t understand how HTMs would be considered ‘the elephant in the room’. Can you elaborate on their relevance to a discussion of cognitive models? I’m only familiar with their application to pattern / anomaly detection.

Given an HTM implementation in a neural network, one of the really fun things about Nengo is that we can then plug that right in. So if there is a neural-net implementation of HTMs, we can add pattern detection as a feature of Spaun for sure (a rough sketch of what plugging an external model in looks like is below).
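
As a rough illustration (not Spaun code; the sizes, the `htm_step` placeholder, and the random stimulus are all hypothetical), any external model that maps vectors to vectors can be wrapped in a `nengo.Node` and connected to spiking ensembles:

```python
import numpy as np
import nengo

n_in = 16

def htm_step(t, x):
    # Hypothetical stand-in for a real HTM update; any callable that maps the
    # current input vector to an output vector could go here.
    return (x > 0.5).astype(float)

with nengo.Network() as model:
    # Sparse random binary input as a toy stimulus.
    stim = nengo.Node(lambda t: (np.random.rand(n_in) < 0.1).astype(float),
                      size_out=n_in)
    # A nengo.Node can wrap arbitrary Python code, so an HTM implementation
    # could sit here and exchange vectors with the rest of a spiking model.
    htm = nengo.Node(htm_step, size_in=n_in, size_out=n_in)
    nengo.Connection(stim, htm, synapse=None)
    # The Node output can then drive ordinary spiking ensembles (or Spaun modules).
    readout = nengo.Ensemble(n_neurons=400, dimensions=n_in)
    nengo.Connection(htm, readout)
```

That said, wrapping it in a Node keeps the HTM itself non-neural; a fully spiking version would still have to resolve the issues listed above.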

Hopefully that helps!
