But I don’t quite understand how I should use the trained weights. That is, what signals does pre pass to post once the teaching neurons are no longer passing signals to post? And how do I set the teaching neurons to stop passing signals to post? I tried to set it up this way:
teaching_neuron = nengo.Node(size_in=1, output=lambda t, x: e if t < 1 else 0)
But it didn’t achieve the result I wanted. Am I doing it right? Is there a difference between sending no signal and sending a signal with a value of 0? I would appreciate it if you could provide examples or answer my questions.
While STDP is supported in Nengo, it differs from the other learning rules that are available in Nengo. STDP modulates the connection weights based purely on the timing between the pre and post spikes. Unlike the PES, Oja, or Voja learning rules, STDP does not require an external error signal. Because of this, you’ll need to understand exactly how you want to use STDP within your network. I’m not an expert on using STDP within neural networks, so you’ll have to do your own research there.
Since STDP operates solely on the spike timings, there is no real way to “stop” the modulation of the connection weights: as long as the spike timings meet the conditions for a change, the weights will change. This being the case, you’ll need to structure your network carefully. In the case of your network, I’m not sure about the exact implementation (are you using ensembles or single neurons? what type of connections have you made?), so I can only speculate on how to “stop” the “teaching neurons”. I am going to assume that the “teaching neurons” are there to induce spikes in the post population, and thus, to “stop” them, what you’ll want to do is inhibit the “teaching neurons”. Here’s an example of how you would do this:
Ah, since you are using a Nengo node for your “teaching neuron”, then that is the correct way to “turn it off”. After t = 1, the node outputs zero, which in turn will not induce any spikes in the post population. (In Nengo, a signal with a value of 0 injects no current into the post neurons, so it is effectively the same as sending no signal at all, although the post neurons can still spike from their bias currents.) But keep in mind that the STDP learning rule is always on, so the connection between pre and post can still cause spikes in the post population, even with no input from the “teaching neuron”.