For example, I want to have 3 cloned networks and 1 main network working on a supervised learning task. After a certain simulation time, I want to soft-update the parameters of the main network using the parameters of the cloned networks.
I’m not clear on the scenario you’re describing. Can you make a drawing of your network layout and/or show some code?
I don’t have code yet. The idea is simple:
- I want to have one type of network; the exact architecture is not important.
- I want to have copies of this network.
- I want to simulate the copies separately, learning the connection weights based on some kind of error function.
- After N seconds I want to average the weights of the copies and set the main network's weights to the average.
Are you using a Nengo network or just a normal Keras network? It’s possible with both, but I can’t imagine the results being very good. Taking the average of a bunch of network weights isn’t a very effective way of combining the knowledge of three separate networks. Usually the networks’ outputs are combined to form a consensus instead.
Did you mostly just want to know if it was possible or not?
I just wanted to know if it was possible; averaging was just one example of usage. But yes, I would like to combine the parameters and then update the main network. I am using a Nengo network.
This kind of usage is possible, but only with the reference backend (not Nengo OpenCL or Nengo SpiNNaker, to the best of my knowledge). And while it’s possible, Nengo is not currently designed to be used this way, so the way to do it will feel like a bit of a hack. We will be doing some backend reorganization sometime this year though, which may make this use case more comfortable (though it’s still unlikely that it will be supported by backends other than the reference backend).