How to achieve long-term potentiation and long-term depression

Hello everyone, I am a newcomer.
I am very interested in Nengo, but I have run into some problems while using it.

  1. We all know that the main manifestations of synaptic plasticity in the brain are LTP and LTD. How can STDP be used to achieve synaptic plasticity in Nengo?
  2. I want to build a complex model in which only a few connections use supervised learning to train their weights. Can Nengo do this? Are there any examples that could help me understand?
    I would be very grateful if you could answer my questions. All kinds of ideas are also welcome.

Hi @YL0910,

To answer your questions, we support multiple learning rules in Nengo, and the BCM rule in particular performs STDP calculations as part of the learning rule update.
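In case a concrete starting point helps, here is a minimal sketch of attaching the BCM rule to a connection (the ensemble sizes, input signal, and learning rate are placeholders I made up, not a recommendation):

```python
import numpy as np

import nengo

with nengo.Network() as model:
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))
    pre = nengo.Ensemble(60, dimensions=1)
    post = nengo.Ensemble(60, dimensions=1)

    nengo.Connection(stim, pre)

    # BCM adapts the full neuron-to-neuron weight matrix, so the connection
    # needs a weight-matrix solver rather than decoded weights.
    conn = nengo.Connection(
        pre,
        post,
        solver=nengo.solvers.LstsqL2(weights=True),
        learning_rule_type=nengo.BCM(learning_rate=1e-9),
    )

with nengo.Simulator(model) as sim:
    sim.run(1.0)
```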

Regarding examples of supervised learning, the Nengo documentation contains many examples of both supervised and unsupervised learning.

Thank you very much for your answer! I read the link you shared with me. I want to find a supervised learning rule based on STDP. Is there any research in this area?

I’m not familiar with any supervised learning rules based on STDP, although @tbekolay may have some insights. His Master’s thesis focused on learning in spiking neural networks with STDP.

Probably my most salient publication on the topic would be this one. You could achieve this in modern Nengo by setting learning_rule_type=[nengo.PES(), nengo.BCM()] on a connection, but part of the reason we don’t have any examples of this is that there isn’t a practical benefit over just nengo.PES() (that I can find).
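For reference, a rough sketch of what that combined-rule setup could look like (the input signal, target, ensemble sizes, and learning rates are illustrative assumptions, not a tested recipe):

```python
import numpy as np

import nengo

with nengo.Network() as model:
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))
    pre = nengo.Ensemble(100, dimensions=1)
    post = nengo.Ensemble(100, dimensions=1)
    error = nengo.Ensemble(100, dimensions=1)

    nengo.Connection(stim, pre)

    # Both rules adapt the same connection; BCM requires the full weight matrix.
    conn = nengo.Connection(
        pre,
        post,
        solver=nengo.solvers.LstsqL2(weights=True),
        learning_rule_type=[nengo.PES(), nengo.BCM(learning_rate=1e-9)],
    )

    # error = actual - target (here the target is simply the stimulus).
    nengo.Connection(post, error)
    nengo.Connection(stim, error, transform=-1)

    # With a list of rules, conn.learning_rule is a list in the same order;
    # only the PES rule takes an error input.
    nengo.Connection(error, conn.learning_rule[0])
```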

Hello @tbekolay! I was going through the publication on hPES. I was wondering if we can classify MNIST with only unsupervised learning (BCM) in Nengo, as done in https://www.frontiersin.org/articles/10.3389/fncom.2015.00099/full.

It is probably possible! But I’m not aware of anyone who’s done it. My guess is that it will take some parameter tweaking to get working, but aside from that there’s no reason why it wouldn’t work.

Alright, I shall try it with BCM. I am new to Nengo syntax, so I was finding it difficult to define the training and evaluation procedure for reproducing the results.


Once you have a start on the model, feel free to post it here for help getting it working! It’s hard for us to get a model started since we haven’t read the source paper, but once you have the structure it’s more feasible for us to iterate on the model.


Sure. Thanks a lot. I will post it here.

Hello, everyone!
I am exploring a new approach to supervised STDP learning: there is a third signal between two connected neurons. When the third signal is positive (or when it exists), the connection weight increases; otherwise, the connection weight decreases. At the moment I only have the STDP program, so any ideas, suggestions, or examples are welcome!

Hi @YL0910, to me, it seems like what you are looking for is a variant of the PES learning rule. You can either use the PES learning rule as is and apply a function to the error signal to convert a continuous-valued signal into a discrete exist/not-exist signal, or you can write your own custom learning rule based on the PES learning rule. If you want to write your own learning rule, this forum post outlines how the learning rule logic is parsed by the Nengo simulator, and this forum post illustrates how to take an existing learning rule and make a custom version of it.
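To make the first option concrete, here is a rough sketch (the thresholding function, dead-zone value, and ensemble sizes are assumptions made up for illustration):

```python
import numpy as np

import nengo


def discretize(t, x):
    # x[0] is the decoded error; output +1/-1 outside a small dead zone, else 0.
    return np.sign(x[0]) if abs(x[0]) > 0.05 else 0.0


with nengo.Network() as model:
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))
    pre = nengo.Ensemble(100, dimensions=1)
    post = nengo.Ensemble(100, dimensions=1)
    raw_error = nengo.Ensemble(100, dimensions=1)
    discrete_error = nengo.Node(discretize, size_in=1, size_out=1)

    nengo.Connection(stim, pre)
    conn = nengo.Connection(pre, post, learning_rule_type=nengo.PES())

    # raw error = actual - target
    nengo.Connection(post, raw_error)
    nengo.Connection(stim, raw_error, transform=-1)

    # The discretized "third signal" is what drives the weight update.
    nengo.Connection(raw_error, discrete_error)
    nengo.Connection(discrete_error, conn.learning_rule)
```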

Thank you very much for your meticulous help! I built a custom learning algorithm, but I want it to run only when there is an error between the actual output and the target output. Is there a function in Nengo that can compare the output values of two Ensembles in real time?

I’m not entirely sure what you want to accomplish, and there are multiple ways to do this. You can use either a nengo.Node or a nengo.Ensemble to compute the error signal. I’d suggest you look at the PES learning rule example to get an idea of how we accomplish this in our code.
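As a minimal illustration of the nengo.Node option (the ensemble names and sizes here are placeholders; the Ensemble option would instead feed both signals into an error Ensemble, with the target connection using transform=-1):

```python
import nengo

with nengo.Network() as model:
    actual = nengo.Ensemble(100, dimensions=1)
    target = nengo.Ensemble(100, dimensions=1)

    # The Node receives both decoded values and outputs their difference
    # at every timestep: first input dimension is the actual output,
    # second is the target output.
    compare = nengo.Node(lambda t, x: x[0] - x[1], size_in=2, size_out=1)

    nengo.Connection(actual, compare[0])
    nengo.Connection(target, compare[1])

    # compare now outputs (actual - target) and can drive a learning rule,
    # e.g. nengo.Connection(compare, conn.learning_rule)
```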