Learn synaptic delays

Hi!
I want to implement a learning rule that updates synaptic delays.

To achieve that goal, I need (correct me if I’m wrong) to 1) implement a custom synaptic model that implements the delays, as a subclass of nengo.synapses.Synapse, and 2) implement a custom synaptic-delay learning rule, as a subclass of nengo.learning_rules.LearningRuleType.

I achieved 1).

Now, 1) is it possible to update variables other than the weights with a learning rule?
And 2) how can I update the delays of my custom synapses with a learning rule?
It seems that the only variable available to a synapse is the output activity of the presynaptic neurons.

Hi @Adri, and welcome to the Nengo forums. :smiley:

To answer your questions:

That’s correct. To implement a custom synaptic model, you’ll want to make a subclass of the nengo.synapses.Synapse class.

Implementing a custom learning rule that works with your custom synapse is a little bit more complicated. The learning rule implementation (actually, almost all Nengo objects are implemented this way) consists of 3 parts:

  • The interface code
  • The builder code
  • The operator code

The interface code is what the Nengo user uses to create and configure the Nengo objects. This is what is contained in the nengo.learning_rules module, and the subclasses of nengo.learning_rules.LearningRuleType are variants of the generic Nengo learning rule interface. In this interface class, you’ll want to provide a mechanism with which you can store the user’s configuration of the learning rule (e.g., what the learning rate is, or what the synapse values are, etc.).
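
As a rough illustration (a minimal, untested sketch only; the DelayLearning name and its parameters are made up for this example), an interface class might look something like this:

import nengo
from nengo.params import Default, NumberParam

class DelayLearning(nengo.learning_rules.LearningRuleType):
    """Hypothetical interface class for a delay-learning rule."""

    # Nengo's built-in machinery only knows how to modify encoders, decoders,
    # and weights, so a delay-modifying rule needs its own builder/operator.
    modifies = "weights"
    probeable = ("delta",)

    learning_rate = NumberParam("learning_rate", low=0, readonly=True, default=1e-4)

    def __init__(self, learning_rate=Default):
        super().__init__(learning_rate, size_in=0)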

The learning rule builder code is what Nengo uses to take the learning rule parameters and build the corresponding Nengo objects and Nengo operators used to implement said learning rule. The learning rule operator code is what Nengo actually runs during the simulation (i.e., these are the functions that are called to modify the parameters used by the learning rule). Both the builder and operator code for the built-in learning rules can be found in the builder/learning_rules.py file.
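
To give a flavour of what the operator part involves (again, just a sketch; SimDelayLearning, the delays signal, and the update rule are hypothetical and untested):

import numpy as np
from nengo.builder.operator import Operator

class SimDelayLearning(Operator):
    """Hypothetical operator that nudges a `delays` signal every timestep."""

    def __init__(self, pre_filtered, error, delays, learning_rate, tag=None):
        super().__init__(tag=tag)
        self.learning_rate = learning_rate
        # Tell the operator scheduler which signals this operator touches.
        self.sets = []
        self.incs = []
        self.reads = [pre_filtered, error]
        self.updates = [delays]

    def make_step(self, signals, dt, rng):
        pre = signals[self.reads[0]]
        error = signals[self.reads[1]]
        delays = signals[self.updates[0]]
        alpha = self.learning_rate * dt

        def step_simdelaylearning():
            # Placeholder update, standing in for your actual delay rule.
            delays[...] += alpha * np.outer(error, pre)

        return step_simdelaylearning

# The matching builder function would be registered with
# `@nengo.builder.Builder.register(DelayLearning)`; it would look up (or
# create) the Signal holding the delays, grab the presynaptic activities and
# the error signal, and then call
# `model.add_op(SimDelayLearning(pre_filtered, error, delays, learning_rate))`.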

To construct your custom learning rule, you’ll need to have all 3 components. You can see an example of how a custom learning rule is implemented in this post.

By default, the only variables that can be updated with a learning rule are the encoders, decoders, and weights. It should be possible, however (caveat: I haven’t tested it, so I’m not 100% sure), to write your builder and operator functions to work on the synapses instead.

This depends on your implementation of the delay synapse. The operator object for Nengo synapses (the SimProcess operator) calls whatever step function is provided to it by the make_step method of the synapse class. If you configure the step function to use attributes of your delay synapse class, then modifying those attributes in your learning rule should (once again, haven’t tested it…) also cause the behaviour of the synapse to change.
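
To illustrate the pattern (another untested sketch; the DelaySynapse class is made up, the exact make_step signature differs a bit between Nengo versions, and Nengo synapses are frozen objects, so storing a mutable attribute like this may need extra care):

import numpy as np
import nengo

class DelaySynapse(nengo.synapses.Synapse):
    """Hypothetical delay synapse whose step function reads a mutable attribute."""

    def __init__(self, delay_steps=10):
        super().__init__()
        self.delay_steps = delay_steps  # a learning rule could modify this later

    def make_step(self, shape_in, shape_out, dt, rng, state=None):
        max_steps = 1000                 # longest delay this buffer can hold
        buffer = np.zeros((max_steps,) + shape_out)
        index = [0]
        synapse = self                   # close over the instance itself

        def step_delaysynapse(t, x):
            i = index[0]
            buffer[i % max_steps] = x
            # Re-reading synapse.delay_steps each call means that changing the
            # attribute from outside changes the delay on the next timestep.
            out = buffer[(i - synapse.delay_steps) % max_steps]
            index[0] = i + 1
            return out

        return step_delaysynapse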

If I get the chance to test this type of custom learning rule, I’ll post a reply to this thread.

Hi @xchoo, thank you for your detailed answer, much appreciated!

I just found out that implementing delays (instantiated as a buffer) in a Synapse object would cause big efficiency issues. Since the synaptic connections are handled pointwise (one presynaptic to one postsynaptic neuron), I wouldn’t be able to take advantage of NumPy operations for the synaptic delay buffer. Instead, I would have to instantiate and update a buffer for each connection, which would be catastrophic for a fully connected network.

Here is some non-Nengo code to instantiate a buffer for the delays and to update them, taking advantage of vectorized operations:
cylindrical_buffer.ipynb (78.3 KB)
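
In short, the idea is something like this (a minimal sketch of the idea, not the attached notebook itself; the sizes and delays are arbitrary):

import numpy as np

# Minimal sketch of a vectorized circular ("cylindrical") delay buffer.
# Every connection (i, j) has its own integer delay (in timesteps), and all
# delayed values are read with a single NumPy fancy-indexing operation.

n_pre, n_post, max_delay = 100, 100, 50
delays = np.random.randint(1, max_delay, size=(n_post, n_pre))

buffer = np.zeros((max_delay, n_pre))  # ring buffer of the most recent inputs
head = 0                               # write position for the current step

def push_and_read(x):
    """Store the newest presynaptic vector, return the delayed value per connection."""
    global head
    buffer[head] = x
    read_idx = (head - delays) % max_delay        # shape (n_post, n_pre)
    delayed = buffer[read_idx, np.arange(n_pre)]  # buffer[read_idx[i, j], j]
    head = (head + 1) % max_delay
    return delayed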

Thus my question is: is it possible to bypass the use of synapses to implement this buffer, so that it can benefit from vectorized operations over the whole set of connections?
In this post you mentioned the signal flow:

signal input → × connection weight → synapse applied → neuron non-linearity → signal output

What I would like to do is:
signal input → buffered (delayed) signal → × connection weight → neuron non-linearity → signal output

Or alternatively:
signal input → × connection weight → buffered (delayed) signal → neuron non-linearity → signal output

In addition to the vectorization problem, I would also need access to the delays in order to update them with an array of delay changes (Δdelays). Would it still be possible to implement a learning rule (interface + builder + operator) for a non-synaptic object?

That would be really great, thanks !

Nengo uses one synapse per connection, and since connections can be made between entire ensembles of neurons, it should be possible (though I haven’t tried it yet), depending on how your network is structured, to implement your delay synapse so that it operates on the vectorized data being passed along that connection.

However, if you have multiple ensembles you want to connect together, then you will start to run into the scaling issue you described, since each connection gets its own synapse instance (and thus its own delay buffer).

Another option (I’m not sure if it would help performance-wise, but it would definitely be easier to test) is to use a nengo.Node to emulate the behaviour of the delay, without needing to write your own synapse class. As an example:

def delay_func(t, x):
    <insert code for delay>

with nengo.Network() as model:
    ...
    delay_node = nengo.Node(delay_func, size_in=ensA.dimensions)
    nengo.Connection(ensA, delay_node, synapse=None)  # None on synapse is important here
    nengo.Connection(delay_node, ensB, synapse=None)  # None on synapse is important here
    ...

In the example code above, the delay_node is used to delay the signal going from ensA to ensB. Note that the None for the synapses is important since you don’t want to apply any filtering to this signal.
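
In case it’s useful, here is one way such a delay function could be written (a minimal sketch; the make_delay_func name, the ring buffer stored in a closure, and the 100-step delay are just for illustration):

import numpy as np
import nengo

def make_delay_func(dimensions, delay_steps):
    # Ring buffer of past inputs; `step` counts how many samples we've seen.
    buffer = np.zeros((delay_steps, dimensions))
    step = [0]

    def delay_func(t, x):
        i = step[0] % delay_steps
        out = buffer[i].copy()  # the value written `delay_steps` timesteps ago
        buffer[i] = x           # overwrite it with the newest input
        step[0] += 1
        return out

    return delay_func

with nengo.Network() as model:
    ensA = nengo.Ensemble(50, dimensions=1)
    ensB = nengo.Ensemble(50, dimensions=1)
    delay_node = nengo.Node(make_delay_func(1, delay_steps=100), size_in=1)
    nengo.Connection(ensA, delay_node, synapse=None)
    nengo.Connection(delay_node, ensB, synapse=None)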

With this approach, you can “concatenate” multiple connections into one connection by using the index slicing feature on connections. In the example below, there are two connections, ensA -> ensB and ensC -> ensD:

with nengo.Network() as model:
    ...
    delay_node = nengo.Node(
        delay_func, size_in=ensA.dimensions + ensC.dimensions)
    # Note that `delay_func` has to return the correct vector size
    # (which is ensA.dimensions + ensC.dimensions)

    # ensA -> ensB connection
    nengo.Connection(ensA, delay_node[:ensA.dimensions], synapse=None)
    nengo.Connection(delay_node[:ensA.dimensions], ensB, synapse=None)

    # ensC -> ensD connection
    nengo.Connection(ensC, delay_node[ensA.dimensions:], synapse=None)
    nengo.Connection(delay_node[ensA.dimensions:], ensD, synapse=None)

If you use the nengo.Node approach to implement your delay, you can still implement a “learning rule”, with the caveat that you can’t use the built-in Nengo learning rule API. Instead, you’ll need to build the learning rule logic into the delay_func itself. To give the delay_func access to the training signal, you’ll need to feed it to the delay_node as an additional dimension on the input signal. In the example code below, I provide the “error signal” as index 0 of the input to delay_node (although you can stick it at the end as well, you just have to get the indices correct):

def delay_func(t, x):
    error_sig = x[0]
    <insert code for delay + learning>

with nengo.Network() as model:
    ...
    delay_node = nengo.Node(
        delay_func, size_in=1 + ensA.dimensions + ensC.dimensions)
    # Note that `delay_func` still has to return just the delayed signal
    # (i.e., ensA.dimensions + ensC.dimensions values; the error is input-only)

    # ensA -> ensB connection
    nengo.Connection(ensA, delay_node[1:ensA.dimensions+1], synapse=None)
    nengo.Connection(delay_node[:ensA.dimensions], ensB, synapse=None)

    # ensC -> ensD connection
    nengo.Connection(ensC, delay_node[1+ensA.dimensions:], synapse=None)
    nengo.Connection(delay_node[ensA.dimensions:], ensD, synapse=None)

    # error connection
    nengo.Connection(error, delay_node[0])  
    # Up to you to apply a synapse here or not. This synapse filters the error signal

With the approach above, you’ll need to bake all of the learning rule parameters into delay_func. Alternatively, it’s equally valid to create a class and pass a method of that class to the nengo.Node to run. Something like so:

class LearnedDelay:
    def __init__(self, <delay and learning parameters>):
        ...

    def step(self, t, x):
        error_sig = x[0]
        delta = error_sig * self.learning_rate + <...>


with nengo.Network() as model:
    ...

    learned_delay = LearnedDelay(size=ensA.dimensions + ensC.dimensions)
    delay_node = nengo.Node(
        learned_delay.step, size_in=1 + ensA.dimensions + ensC.dimensions)
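
For completeness, here is a self-contained sketch of what such a class could look like, with a toy delay update standing in for an actual learning rule (the class internals, sizes, and the update itself are all placeholders, and I haven’t tested this in a full model):

import numpy as np

class LearnedDelay:
    """Node-backed delay whose per-dimension delays are adapted online (toy)."""

    def __init__(self, size, max_delay_steps=200, learning_rate=0.1):
        self.size = size
        self.max_delay_steps = max_delay_steps
        self.learning_rate = learning_rate
        self.delays = np.full(size, max_delay_steps // 2, dtype=float)
        self.buffer = np.zeros((max_delay_steps, size))
        self.head = 0

    def step(self, t, x):
        error_sig = x[0]
        self.buffer[self.head] = x[1:]

        # Toy "learning rule": shift all (real-valued) delays along the error.
        # A real delay-learning rule would compute a per-delay update here.
        self.delays += self.learning_rate * error_sig
        self.delays = np.clip(self.delays, 0, self.max_delay_steps - 1)

        # Read each dimension at its own (rounded-down) delay.
        read_idx = (self.head - self.delays.astype(int)) % self.max_delay_steps
        out = self.buffer[read_idx, np.arange(self.size)]
        self.head = (self.head + 1) % self.max_delay_steps
        return out

Since the delays live on the Python object, you can also inspect learned_delay.delays directly while (or after) the simulation runs.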

Note that my suggestions are based on my understanding of your problem, in which you want to apply this delay to multiple Nengo connections. As I mentioned above, the nengo.Synapse approach should work if you just want to do it on a single (or a fairly limited number of) connections. Also note that there may be more efficient implementations, since Nengo is fairly flexible. :smiley: