It's nice to have a reason to look at these things more closely again! Edit: heads up, I haven't read the comments in the Numenta thread yet; I'll try to read them over ASAP.
OK, I found Numenta's BAMI manual (which has updates in the algorithm sections as recent as February 2017, so I'm confident I've got the right one this time). Are there any particular changes in the most recent papers that I should check out?
Jumping in: relevant to biological plausibility, BAMI says that HTM is comfortable leaving out biological details/constraints and moving to higher levels of abstraction once the lower levels are understood, and I agree that's fine. In fact, given computational constraints, it's a requirement for exploring higher brain function while staying grounded in biology.
To that end, though, I would want to be certain that (for example) the previously mentioned questions don't give rise to any problems in implementation under more stringent biological constraints. I'm not aware of models that resolve those issues, but there's more work out there than anyone can keep up with, so it's entirely possible this was handled long ago and I just haven't seen it. (Please link if you know of any; I'd be very interested to read them!)
I agree that hashing out the equivalences among HTM, NuPIC, Nengo, Spaun, and the NEF is a good idea! I'll try to recap (please forgive the repetition).
Your assessment of HTM matches my understanding after reading more: it's a set of features that models of the cortex should have (e.g. sparse distributed representations, sensory encoders, on-line learning), rather than a specific implementation / cortical model.
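To illustrate just the SDR part, here's a toy sketch (something I made up for illustration; it is NOT Numenta's actual scalar encoder, and all names and parameters here are hypothetical): encode a scalar as a block of active bits so that nearby values share bits, i.e. overlap tracks similarity.

```python
# Toy SDR-style scalar encoder (illustrative only, NOT Numenta's encoder).
import numpy as np

def toy_scalar_sdr(value, vmin=0.0, vmax=100.0, n_bits=400, n_active=21):
    """Map a value in [vmin, vmax] to a binary vector with n_active ones."""
    sdr = np.zeros(n_bits, dtype=np.uint8)
    frac = (value - vmin) / (vmax - vmin)
    start = int(round(frac * (n_bits - n_active)))
    sdr[start:start + n_active] = 1  # contiguous block of active bits
    return sdr

a, b = toy_scalar_sdr(30.0), toy_scalar_sdr(32.0)
print(int(np.dot(a, b)))  # large overlap, because 30 and 32 are similar
```

The point being: the representation is high-dimensional, sparse, and similarity-preserving, which is the checklist-style property HTM cares about rather than any particular encoding scheme.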
I'm not familiar with NuPIC; is it fair to say that it's a platform for hooking up / building systems out of implementations of some of the networks and circuits that fall under HTM theory?
A bit of clarification on Nengo versus the NEF. The NEF is a set of methods for implementing differential equations in feedforward and recurrent distributed systems. Essentially, it's a means of optimizing the connection weights in a neural network to perform some specific function (we often refer to it as a 'neural compiler').
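For anyone following along, the core of that fits in a few lines (this is the standard NEF formulation; G_i is the neuron nonlinearity, and e_i, alpha_i, J_i^bias are the encoder, gain, and bias of neuron i):

```latex
a_i(x) = G_i\big(\alpha_i \langle e_i, x\rangle + J_i^{\mathrm{bias}}\big)
    \quad \text{(encoding)}

\min_{d}\; \int \Big(f(x) - \sum_i d_i\, a_i(x)\Big)^{2}\, dx
    \quad \text{(least-squares solve for decoders)}

\omega_{ji} = \alpha_j\, \langle e_j, d_i\rangle
    \quad \text{(factored connection weights)}
```

So 'compiling' a function f really is just a least-squares problem over the decoders, and the full weight matrix never has to be learned directly.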
Nengo is a neural simulation platform that provides an API (and GUI) for modellers to build neural models and then run the simulations on a bunch of different backends (CPU, GPU, FPGA, neuromorphic hardware, etc.). The API lets users take advantage of the NEF, but it also supports other network training methods (such as deep learning). It's often characterized as only supporting the NEF, but it's more than that!
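As a toy example of both points, here's a minimal Nengo sketch that uses the NEF to 'compile' the dynamics dx/dt = u (an integrator) into a recurrently connected ensemble. This uses only the standard Nengo API; the specific parameter values are just illustrative.

```python
# Minimal NEF-style integrator in Nengo: the network implements
# dx/dt = u by recurrently connecting an ensemble to itself.
import nengo

tau = 0.1  # synaptic time constant used to "compile" the dynamics

with nengo.Network() as model:
    stim = nengo.Node(lambda t: 1.0 if t < 0.2 else 0.0)  # brief input pulse
    ens = nengo.Ensemble(n_neurons=100, dimensions=1)

    # NEF dynamics principle: to get dx/dt = u with synaptic filter tau,
    # scale the input by tau and connect the ensemble back to itself.
    nengo.Connection(stim, ens, transform=tau, synapse=tau)
    nengo.Connection(ens, ens, synapse=tau)

    probe = nengo.Probe(ens, synapse=0.01)

with nengo.Simulator(model) as sim:  # swap in another backend's Simulator here
    sim.run(1.0)
# sim.data[probe] now holds the decoded integrator state over time
```

Switching backends is then just a matter of replacing `nengo.Simulator` with the simulator class from the corresponding backend package; the model definition stays the same.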
Spaun is a functional brain model implemented in Nengo. HTM ideas could be implemented in Nengo. Probably parts of Spaun could be implemented in NuPIC. Yup yup yup.
Hmm, I would be careful, though, not to call HTM a functional model. I think your earlier description is more accurate: a sort of checklist that cortical models need to meet. The BAMI introduction says that some parts of HTM can be written out formally, but mostly these are guiding ideas, and a lot of it can't be captured mathematically (...there's also a statement that computers can't be described purely mathematically, which I would be interested to hear more about).
Part of being a functional model is committing to specific implementation details, which HTM doesn't do. To that end, Spaun and HTM aren't directly comparable. Maybe a more appropriate comparison would be with Dr. Rick Granger's theory of cortical-subcortical loops? Even that still isn't quite right, because Granger's work is a specific algorithm in addition to a general theory of function.
So, of all of the above, I think comparisons between Nengo and NuPIC are appropriate, and anything compared to Spaun should be an explicit implementation that generates predictions about how the brain operates. Maybe this addresses the original question posed?
But as long as I'm going on with a giant rambling post that addresses various topics without an overarching thesis: this ties into a larger discussion about the utility of modeling in general.
I was talking with another research group with more of an engineering bent, and presented a model I had worked on, to which they said: 'OK, so...what? What does this brain model gain us? It just matches some data that we'd expect any model to match.' After drying my tears and regaining my composure, we came around to the view that what's really useful about brain models is when they make specific, testable predictions about how the brain operates. Ideally these predictions should be in direct contention with those of another implemented model, such that an experiment can be devised that supports one underlying set of functions or the other.
Along those lines, I would be very interested to know about HTM implementations that make these kinds of predictions!