Can't probe after modifying Model at runtime

I want to modify the learning rate of my learning rule at runtime. To do this, I run two Simulators one after the other and modify my model in between:

model = nengo.Network( seed=seed )
with model:
    inp = nengo.Node( lambda t, x: train_neurons if t < time / 2 else [ -2 ] * neurons, size_in=1 )
    ens = nengo.Ensemble( neurons, 1)
    nengo.Connection( inp, ens.neurons, seed=seed )
    conn = nengo.Connection( ens.neurons, ens.neurons,
                             learning_rule_type=mOja( gain=1e6, beta=beta, noisy=0 ),
                             transform=np.zeros( (ens.n_neurons, ens.n_neurons) ) )

    ens_probe = nengo.Probe( ens.neurons )
    weight_probe = nengo.Probe( conn, "weights" )
    pos_memr_probe = nengo.Probe( conn.learning_rule, "pos_memristors" )
    neg_memr_probe = nengo.Probe( conn.learning_rule, "neg_memristors" )

with nengo.Simulator( model ) as sim_train:
    sim_train.run( time / 2 )

conn.learning_rule_type = mOja( gain=1e6, beta=beta, noisy=0, learning_rate=0 )

with nengo.Simulator( model ) as sim_test:
    sim_test.run( time / 2 )

Now the problem is that I can’t seem to probe my pos_memristors Signal during the second simulation. The build process for sim_test fails because the LearningRule signals haven’t been built by my custom build function when the probes are built.

Is there any way of circumventing this, either by hacking the build process, or is there some other way of modifying an object’s parameters at runtime that doesn’t require running two separate simulations?

This is related to a known issue, which makes it difficult to change the learning rate. Unfortunately, Aaron’s workaround won’t even work for you, since it involves scaling the error to affect the learning rate, but mOja doesn’t have an external error signal to scale.

If you want to hack it, you can look at how the learning rate is applied in the SimOja operator. If you change the learning rate on that operator and then rebuild the model, the new learning rate will take effect. (Unfortunately, you do still have to rebuild, i.e. make a new simulator, so that the step_simoja function is remade. It could be possible to avoid this with more hacking, by removing the old step_simoja function and re-calling SimOja.make_step on your SimOja operator.) To find the SimOja operator, you can loop through sim.model.operators. If you have just one Oja learning connection, this is straightforward; otherwise you need to match based on some attribute, for example .weights. That is, you can check that the .weights signal on the operator is the same object as the weights signal for your connection, which you can get with sim.model.sig[conn]["weights"], where conn is your connection.

Also, note that I’ve written this all out for the standard Oja learning rule, so you might have to tailor it a bit for your mOja rule if there are differences e.g. in how the learning rate is applied.
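To make that concrete, here is a rough sketch of the lookup described above (written for the standard SimOja operator and the variable names from your post; your mOja builder will create a different operator class, so substitute that):

from nengo.builder.learning_rules import SimOja

# find the operator whose weights signal matches this connection's weights
weights_sig = sim_train.model.sig[conn]["weights"]
oja_ops = [
    op
    for op in sim_train.model.operators
    if isinstance(op, SimOja) and op.weights is weights_sig
]
assert len(oja_ops) == 1
oja_ops[0].learning_rate = 0
# a new simulator still has to be made (or make_step re-called) for the change
# to take effect, since alpha is captured when make_step runs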

The learning rate actually changes fine, and if I remove these probes from the model (I forgot to include them in the original post), the build and simulation proceed as they should:

pos_memr_probe = nengo.Probe( conn.learning_rule, "pos_memristors" )
neg_memr_probe = nengo.Probe( conn.learning_rule, "neg_memristors" )

The signals being probed are defined in the build function for my custom learning rule, and it seems (to me, at least) that something is going wrong in the order of the build process for sim_test, as I get:
nengo.exceptions.BuildError: Attribute 'pos_memristors' is not probeable on <LearningRule modifying <Connection from <Neurons of <Ensemble (unlabeled) at 0x7fb026d269d0>> to <Neurons of <Ensemble (unlabeled) at 0x7fb026d269d0>>> with type <mOja at 0x7fb02e75d880>>.

This is where the custom Signals I want to probe are being defined:

The problem is that you cannot change the learning rule and rebuild, because there are references to the old learning rule (e.g. in the probes) that don’t get changed.

I think the easiest solution is to find/make a way to change the learning rate on the existing learning rule itself. Since this is a custom learning rule, you already have full control over how it’s built. The simplest approach is to make the learning_rate in the step function a reference to the learning_rate on the operator you create, and then change the learning rate on the operator. Here’s an example:

import numpy as np

import nengo
from nengo.builder.builder import Builder
from nengo.builder.learning_rules import build_or_passthrough, get_post_ens, get_pre_ens
from nengo.builder.operator import Operator


class MyOja(nengo.Oja):
    pass


class MySimOja(Operator):
    def __init__(
        self, pre_filtered, post_filtered, weights, delta, learning_rate, beta, tag=None
    ):
        super().__init__(tag=tag)
        self.learning_rate = learning_rate
        self.beta = beta

        self.sets = []
        self.incs = []
        self.reads = [pre_filtered, post_filtered, weights]
        self.updates = [delta]

    @property
    def delta(self):
        return self.updates[0]

    @property
    def pre_filtered(self):
        return self.reads[0]

    @property
    def post_filtered(self):
        return self.reads[1]

    @property
    def weights(self):
        return self.reads[2]

    @property
    def _descstr(self):
        return f"pre={self.pre_filtered}, post={self.post_filtered} -> {self.delta}"

    def make_step(self, signals, dt, rng):
        weights = signals[self.weights]
        pre_filtered = signals[self.pre_filtered]
        post_filtered = signals[self.post_filtered]
        delta = signals[self.delta]
        beta = self.beta

        def step_simoja():
            alpha = self.learning_rate * dt
            print(f"Alpha: {alpha}")

            # perform forgetting
            post_squared = alpha * post_filtered * post_filtered
            delta[...] = -beta * weights * post_squared[:, None]

            # perform update
            delta[...] += np.outer(alpha * post_filtered, pre_filtered)

        return step_simoja


@Builder.register(MyOja)
def build_oja(model, oja, rule):
    conn = rule.connection
    pre_activities = model.sig[get_pre_ens(conn).neurons]["out"]
    post_activities = model.sig[get_post_ens(conn).neurons]["out"]
    pre_filtered = build_or_passthrough(model, oja.pre_synapse, pre_activities)
    post_filtered = build_or_passthrough(model, oja.post_synapse, post_activities)

    model.add_op(
        MySimOja(
            pre_filtered,
            post_filtered,
            model.sig[conn]["weights"],
            model.sig[rule]["delta"],
            learning_rate=oja.learning_rate,
            beta=oja.beta,
        )
    )

    # expose these for probes
    model.sig[rule]["pre_filtered"] = pre_filtered
    model.sig[rule]["post_filtered"] = post_filtered


seed = 0
time = 0.01

neurons = 10
train_neurons = [1] * neurons

model = nengo.Network(seed=seed)
with model:
    inp = nengo.Node(
        lambda t, x: train_neurons if t < time / 2 else [-2] * neurons, size_in=1
    )
    ens = nengo.Ensemble(neurons, 1)
    nengo.Connection(inp, ens.neurons, seed=seed)
    conn = nengo.Connection(
        ens.neurons,
        ens.neurons,
        learning_rule_type=MyOja(),
        transform=np.zeros((ens.n_neurons, ens.n_neurons)),
    )

    ens_probe = nengo.Probe(ens.neurons)
    weight_probe = nengo.Probe(conn, "weights")
    pre_filt_probe = nengo.Probe(conn.learning_rule, "pre_filtered")
    post_filt_probe = nengo.Probe(conn.learning_rule, "post_filtered")


with nengo.Simulator(model, progress_bar=False) as sim:
    sim.run(time / 2)

    weights_sig = sim.model.sig[conn]["weights"]
    ops = [
        op
        for op in sim.model.operators
        if isinstance(op, MySimOja) and op.weights == weights_sig
    ]
    assert len(ops) == 1
    op = ops[0]
    op.learning_rate = 0

    sim.run(time / 2)
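The key difference from the stock SimOja is that alpha is computed inside step_simoja from self.learning_rate at every timestep, rather than being captured once when make_step is called, so setting op.learning_rate = 0 takes effect on the next timestep without rebuilding the simulator.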

Thanks! Your proposal definitely works, but I can see it causing me problems moving forward, as everything needs to be done inside the same Simulator scope.

For example here, I rely on the separation of sim_train, sim_class, and sim_test to compute statistics vital to the learning process, after modifying/freezing the model between sim_train and sim_class.



I could rewrite the code to do everything inside the same scope and then slice the probes, but it all feels like a bit of a hack. I would really have liked to keep clean, compatible Nengo code where I could swap learning rules without having to change any of the logic of the rest of the script.
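(Roughly, by slicing the probes I mean something like this, splitting the data from a single run on the time vector:)

t = sim.trange()
train = t <= time / 2  # boolean mask selecting the training half of the run

weights_train = sim.data[weight_probe][train]
weights_test = sim.data[weight_probe][~train]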
Also, I don’t think this method would work with NengoDL, as build_pre is only called during the initial build process and there is no obvious way I can see to change the internal learning_rate Tensor after that.

The other thing you can do: even though LearningRuleType is a FrozenObject, and thus should not allow you to change parameters, you can still hack in and change them. For example:

import numpy as np

import nengo


seed = 0
time = 0.01

neurons = 10
train_neurons = [1] * neurons

model = nengo.Network(seed=seed)
with model:
    inp = nengo.Node(train_neurons)
    ens = nengo.Ensemble(neurons, 1)
    nengo.Connection(inp, ens.neurons, seed=seed)
    conn = nengo.Connection(
        ens.neurons,
        ens.neurons,
        learning_rule_type=nengo.Oja(),
        transform=np.zeros((ens.n_neurons, ens.n_neurons)),
    )

    ens_probe = nengo.Probe(ens.neurons)
    weight_probe = nengo.Probe(conn, "weights")
    delta_probe = nengo.Probe(conn.learning_rule, "delta")


with nengo.Simulator(model, progress_bar=False) as train_sim:
    train_sim.run(time / 2)

print(np.abs(train_sim.data[delta_probe][-1]).mean())

# change the learning rate
rule_type = conn.learning_rule_type
type(rule_type).learning_rate.data[rule_type] = 0

with nengo.Simulator(model, progress_bar=False) as test_sim:
    test_sim.run(time / 2)

print(np.abs(test_sim.data[delta_probe][-1]).mean())

Just be aware that if you use the same learning rule type instance for multiple connections, this will change the learning rate for all those connections (since the parameter value is associated with the learning rule type instance).
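For example, with two (hypothetical) learned connections conn_a and conn_b, giving each its own instance keeps the hack local to one of them:

with model:
    rule_a = nengo.Oja()
    rule_b = nengo.Oja()
    conn_a = nengo.Connection(
        ens.neurons, ens.neurons, learning_rule_type=rule_a,
        transform=np.zeros((neurons, neurons)),
    )
    conn_b = nengo.Connection(
        ens.neurons, ens.neurons, learning_rule_type=rule_b,
        transform=np.zeros((neurons, neurons)),
    )

# only conn_a stops learning; conn_b still uses rule_b's original learning rate
type(rule_a).learning_rate.data[rule_a] = 0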

Also, I’m not disagreeing that being able to change the learning_rule_type on a connection would be useful. It’s on our list of features to add. It will just be a little while before it actually gets implemented.

I guess I’ll just make do with what’s possible now, then :slight_smile: Is there any idea of a timeline?

It would be really useful to be able to also change ensemble parameters like neuron_type, as I’m attempting to do here:


(And I’ve just realised that all those changes I thought I was making to “freeze” the model (lines 283-292) were probably not working, as doing conn.learning_rule_type = mOja( gain=1e6, beta=beta, noisy=0, learning_rate=0 ) does not stop learning, unlike hacking into the Operator, or the LearningRule as you suggested.)

Changing the neuron type appears to work for me, as in the following example. What is the specific problem you’re having with changing the neuron type?

import numpy as np

import matplotlib.pyplot as plt
import nengo
from nengo.dists import Choice


time = 1.0
neurons = 10

neuron_type0 = nengo.LIF(tau_ref=0.01)
neuron_type1 = nengo.LIF(tau_ref=0.001)

model = nengo.Network(seed=0)
with model:
    inp = nengo.Node([10] * neurons)
    ens = nengo.Ensemble(
        neurons, 1, neuron_type=neuron_type0, gain=Choice([1]), bias=Choice([1])
    )
    nengo.Connection(inp, ens.neurons)

    ens_probe = nengo.Probe(ens.neurons, synapse=nengo.Alpha(0.01))


with nengo.Simulator(model, progress_bar=False) as train_sim:
    train_sim.run(time / 2)


ens.neuron_type = neuron_type1

with nengo.Simulator(model, progress_bar=False) as test_sim:
    test_sim.run(time / 2)

plt.plot(train_sim.trange(), train_sim.data[ens_probe][:, 0])
plt.plot(test_sim.trange(), test_sim.data[ens_probe][:, 0])
plt.show()

Ah, that’s good to know: I just assumed that switching the neuron_type would not work because trying the same for the learning_rule_type didn’t. The backend implementation for neurons seemed a bit more refined compared to that for learning rules, last time I looked, so it would make sense that it worked.

What if I wanted to remove the learning rule from the connection altogether in order to stop learning, instead of setting the learning rate to zero? Would it be a better stopgap solution? Would it be feasible?

Switching the learning_rule_type to None on the connection and making a new simulator should work, as long as you don’t have any probes on attributes of the learning rule. That is, you’d have to get rid of your pos_memr_probe and neg_memr_probe, though you could do that in between the train and test simulations by removing them from model.probes (where model is the nengo.Network that the probes have been added to). It would also be a problem if you had any (error) connections into the learning rule, but since your learning rule doesn’t have an error input (unlike e.g. PES, which does), that shouldn’t be a problem for you.

As you suggested, I tried removing the probes by:

with nengo.Simulator( model, seed=seed ) as sim_train:
    sim_train.run( time / 2 )

conn.transform = sim_train.data[ weight_probe ][ -1 ].squeeze()
conn.learning_rule_type = mOja( gain=1e6, beta=beta, noisy=0, learning_rate=0,
                                initial_state={ "weights": sim_train.data[ weight_probe ][ -1 ].squeeze() }
                                )
# remove probes from model
import re

model.probes = [ x for x in model.probes if not re.match( r"\w+memristors", x.attr ) ]

with nengo.Simulator( model, seed=seed ) as sim_test:
    sim_test.run( time / 2 )

but I’m getting:
nengo.exceptions.ReadonlyError: Network.probes: probes is read-only and cannot be changed

I also tried:
model.objects[ nengo.Probe ] = [ x for x in model.objects[ nengo.Probe ] if not re.match( r"\w+memristors", x.attr ) ]
but, although no exception is thrown, the model is not changed when running sim_test.

You can’t change the probes list to a new list; you need to modify the existing list.

Something like this should work:

to_remove = [ x for x in model.probes if re.match( r"\w+memristors", x.attr ) ]
for probe in to_remove:
    model.probes.remove(probe)

Nice, thanks!
I was hoping that removing the probes after the first simulation and adding a new set before the next simulation might work but, alas, it doesn’t, as I get KeyError: <Probe at 0x7ff13ff05ac0 of 'pos_memristors' of <LearningRule modifying <Connection from <Neurons of <Ensemble (unlabeled) at 0x7ff13ff05640>> to <Neurons of <Ensemble (unlabeled) at 0x7ff13ff05640>>> with type <mOja at 0x7ff13ff05850>>> after the second simulation finishes.
I’ll just have to wait for official support for modifying a learning rule :slight_smile:

with model:
    # remove old probes from the model
    probes_to_remove = [ x for x in model.probes if re.match( r"\w+memristors", x.attr ) ]
    for probe in probes_to_remove:
        model.probes.remove( probe )
    model.probes.append( nengo.Probe( conn.learning_rule, "pos_memristors" ) )
    model.probes.append( nengo.Probe( conn.learning_rule, "neg_memristors" ) )