I was wondering if there has been any implementation in Nengo that uses a system similar to grid cells and place cells in the brain to integrate velocity/direction inputs. The closest thing to what I'm looking for is Spiking RatSLAM; however, the document itself is not very detailed.
To give some more information: I would like to use some kind of attractor network where each cell represents a particular location or orientation. Is there an efficient way to use this kind of 'place code' instead of the distributed representation Nengo normally uses? I imagine it should be possible by giving an ensemble a function/weight matrix that 'bins' the incoming vector representation.
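To sketch the kind of 'binning' I have in mind (the function name, the Gaussian tuning, and the ensemble names are just my own assumptions, not anything from the RatSLAM work): a scalar position would be mapped onto an N-dimensional vector where each dimension peaks near one preferred location, like a place cell.

```python
import numpy as np

def place_code(x, n_cells=16, sigma=0.1):
    """Hypothetical 'binning' function: map a scalar position x in
    [-1, 1] onto an n_cells-dimensional vector, where each dimension
    is a Gaussian bump centred on one preferred location."""
    centres = np.linspace(-1, 1, n_cells)  # preferred locations of the 'cells'
    return np.exp(-((x - centres) ** 2) / (2 * sigma ** 2))

# In Nengo I imagine this could be the function on a connection, e.g.
#   nengo.Connection(position_ens, place_ens, function=place_code)
# so that place_ens receives a localized bump of activity instead of
# the usual distributed representation.
```

An attractor network over `place_ens` would then just need local excitation and broader inhibition in that binned space, if I understand the idea correctly.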
Thanks a lot!