RLS learning sounds like an interesting idea; I will definitely try it.
So I am trying to replicate this work of @tcstewar's with Nengo.
This paper presents two models:
- An External Model, which learns to use external symbols systematically to optimise a cognitive (foraging) task.
- An Internal Model, which learns to use internal representations (internal symbols, like analogies) systematically to optimise the same cognitive (foraging) task.
To test Nengo RL, I am thinking Gym could be an easy platform to start with; if the model performs well in Gym, it might also perform well on the foraging task presented in this paper.
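Before wiring in Nengo, it might help to pin down the environment interface. Below is a minimal sketch of a Gym-style foraging grid; everything here (the `ForagingGrid` class, its layout, the reward scheme) is my own toy invention for illustration, not the paper's task or an actual Gym environment. The random action choice in the loop is the slot a Nengo or DQN policy would fill:

```python
import random

class ForagingGrid:
    """Toy Gym-style foraging environment (illustrative sketch only):
    an agent moves on a size x size grid and gets +1 reward for
    stepping onto a food cell; the episode ends when all food is eaten."""

    def __init__(self, size=5, n_food=3, seed=0):
        self.size = size
        self.n_food = n_food
        self.rng = random.Random(seed)

    def reset(self):
        # Agent starts in a corner; food is scattered over the other cells.
        self.agent = (0, 0)
        cells = [(x, y) for x in range(self.size) for y in range(self.size)
                 if (x, y) != self.agent]
        self.food = set(self.rng.sample(cells, self.n_food))
        return self.agent

    def step(self, action):
        # Discrete actions: 0=up, 1=down, 2=left, 3=right (clipped at walls).
        dx, dy = [(0, -1), (0, 1), (-1, 0), (1, 0)][action]
        x = min(max(self.agent[0] + dx, 0), self.size - 1)
        y = min(max(self.agent[1] + dy, 0), self.size - 1)
        self.agent = (x, y)
        reward = 1.0 if self.agent in self.food else 0.0
        self.food.discard(self.agent)
        done = not self.food
        return self.agent, reward, done, {}

env = ForagingGrid()
obs = env.reset()
total = 0.0
for _ in range(200):
    action = env.rng.randrange(4)  # replace with a Nengo/DQN policy
    obs, reward, done, info = env.step(action)
    total += reward
    if done:
        break
```

With this `reset`/`step` shape, the same training loop should work against real Gym environments too, so the Nengo policy can be swapped between the toy grid and standard Gym tasks without changes.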
My motivation for using Nengo is its claim of biological plausibility; it also has interesting features like semantic pointers, which I think could be useful as an internal representation.
So I have a Nengo implementation of Terry's idea, but currently it is not performing well.
I also have a DQN implementation in PyTorch that gives some interesting results; nengo_dl could probably give similar results, but then it would just be a different implementation of DQN.
So my bigger question with Nengo is: how does a biological neuronal system use internal and external representations to optimise a task?
I am also interested in spatial cognition: how could grid-cell/place-cell-like structures be represented using neurons (as an internal representation of the world)? Is there any paper on this?