Building a custom learning rule operator for Nengo DL

I am working on creating NengoDL builders for a custom neuron type as well as a custom learning rule type. The neuron type is coming along fine, but I am having some issues with the learning rule type: I need the initial values of the weights (as well as of another attribute). I know how to get the current weights into the TensorFlow simulator.

The necessities for the builder:

import nengo_dl
from nengo_dl.builder import OpBuilder

@nengo_dl.builder.Builder.register(SimNAME)
class SimNAMEBuilder(OpBuilder):
    """Build a group of `.NAME` operators."""

    def __init__(self, ops, signals, config):
        super(SimNAMEBuilder, self).__init__(ops, signals, config)

        # gather the weight signals from all ops into one TensorSignal
        self.weights_data = signals.combine([op.weights for op in ops])

In the operator for the regular Nengo simulator (class SimNAME(Operator)), I can get the initial values with

    initial_weights = self.weights.initial_value

after I have read the weights into self.weights. The problem is that .initial_value does not work in the code for the TensorFlow simulator. It does not work like this:

    self.initial_weights_data = signals.combine([op.weights.initial_value for op in ops])

And not like this:

    self.weights_data = signals.combine([op.weights for op in ops])
    self.initial_weights_data = self.weights_data.initial_value

Does anyone know how to solve this issue and get the initial values into the TensorFlow simulator?

I tried to figure it out using the documentation for nengo_dl.learning_rule_builders, but those builders do not use initial values anywhere.

I use Nengo version 2.8.0 and NengoDL version 2.2.2.

Hi Chiel,

Can you provide a bit more information about how you intend your learning rule to work? Why do you need the initial values? Note that these are the values of the weights at the very start of the simulation; usually, the only weight values a learning rule uses are the current values of the weights.

As an example of a learning rule that makes use of the current values of the weights, you might want to look at the SimOja NengoDL builder, which implements the Oja learning rule (the math is here).

As for your specific problem, the reason that signals.combine([op.weights.initial_value for op in ops]) doesn’t work is that op.weights.initial_value is a NumPy array, not a Nengo Signal. We don’t have signals for the initial weights because they’re not something we use throughout the simulation. That said, you could make TensorFlow Tensors out of these NumPy arrays (e.g. with tf.constant), and store/use those, if you really need them in your learning rule.
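For instance, something like this in your builder’s __init__ (a rough sketch, not a definitive implementation; the initial_weights name is just illustrative, and signals.dtype is the simulator’s float type):

    import tensorflow as tf

    # one constant Tensor per op, frozen from that op's initial NumPy values
    self.initial_weights = [
        tf.constant(op.weights.initial_value, dtype=signals.dtype)
        for op in ops
    ]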

Hi Eric,

Again, thank you very much for your response. The project I am working on is a Nengo model that implements short-term plasticity. For the learning rule this means that the weight of a connection depends on its initial weight as well as on some values from the presynaptic neuron: the calcium and neurotransmitter/resources levels. My project uses the model from https://github.com/Matthijspals/STSP as a basis; that page also explains the math.
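Roughly (my shorthand for the math on that page), the effective weight is the initial weight scaled by those two presynaptic variables:

    w(t) \approx w_0 \, u(t) \, x(t)

where u(t) is the calcium (utilization) level and x(t) the available neurotransmitter/resources.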

I need to write the NengoDL builders because the new model I made does not seem to work with NengoOCL, as you noted on another question I posted, and running it with the regular Nengo simulator is impractical because it would take far too much time.

You’ll need to make the initial values into Tensors with tf.constant, then. All that signals.combine does with Tensors is concatenate them along the first axis, so it’s probably easiest to do that concatenation in NumPy first, and then turn the big concatenated set of weights into a Tensor.
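In code, that could look something like this (a sketch along the lines of your earlier snippet; the initial_weights_data attribute name is just illustrative):

    import numpy as np
    import tensorflow as tf

    # concatenate the ops' initial values in NumPy, mirroring what
    # signals.combine does, then freeze the result in one constant Tensor
    initial = np.concatenate([op.weights.initial_value for op in ops], axis=0)
    self.initial_weights_data = tf.constant(initial, dtype=signals.dtype)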

Frankly, there’s a lot of extra complexity in the NengoDL builder classes to allow multiple operators (ops) to be merged into one operator for speed. But when you’re implementing your own operator, you often don’t care as much about speed. If you add the following to your builder class, it will make sure that operators are never merged for your builder (if you’re inheriting directly from OpBuilder, this is the standard definition, so you don’t need to add it again):

    @staticmethod
    def mergeable(x, y):
        return False

Then, you should be able to put assert len(ops) == 1 in __init__ and base everything around that one operator, so you don’t have to worry about combining multiple initial values into one Tensor.
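Putting the pieces together, a minimal sketch of such a single-op builder (SimNAME and its weights attribute are placeholders from your code above; the build_step method that actually applies the learning rule is omitted):

    import tensorflow as tf
    import nengo_dl
    from nengo_dl.builder import OpBuilder


    @nengo_dl.builder.Builder.register(SimNAME)
    class SimNAMEBuilder(OpBuilder):
        """Build a single `.SimNAME` operator."""

        def __init__(self, ops, signals, config):
            super(SimNAMEBuilder, self).__init__(ops, signals, config)

            assert len(ops) == 1  # guaranteed by mergeable() below
            op = ops[0]

            # current weights, read from the simulation signals each step
            self.weights_data = signals.combine([op.weights])

            # initial weights, frozen once as a constant Tensor
            self.initial_weights = tf.constant(
                op.weights.initial_value, dtype=signals.dtype
            )

        @staticmethod
        def mergeable(x, y):
            return False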


Could this help with the issue I’m having here: [NengoDL] Signals.scatter() shape issue?