Modifying the number of neurons at runtime

I’d like to change the number of neurons in an Ensemble at runtime, depending on a specific condition. Is this possible in Nengo? If so, how can I do that?

If not, do you know of any backend frameworks or other simulators where that could be implemented? Could you link some tutorials?

@Eric is there any chance you’d have some insight on this?

Changing the number of neurons is not possible at runtime.

That said, you may be able to change weights to effectively change the number of neurons. Specifically, if all weights out of a neuron are zero, then that neuron will have no effect on the rest of the network and so it is effectively removed.

The best way to do this is with a learning rule, which you can use to arbitrarily control and adjust the weight values. However, if you’re looking for a more “quick and dirty” solution, it is possible (at least in the nengo.Simulator backend) to change weights by manually adjusting the signals. Something like sim.signals[sim.model.sig[connection]["weights"]] = new_weights, where sim is your nengo.Simulator and connection is the nengo.Connection whose weights you want to modify.
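
For example, here is a minimal (untested) sketch of that hacky pattern, assuming the default nengo.Simulator backend. It zeroes the outgoing weights of one neuron so that neuron no longer affects the rest of the network; the learning rule is attached only because that should keep the weights signal writable.

import numpy as np

import nengo

with nengo.Network() as net:
    a = nengo.Ensemble(50, dimensions=1)
    b = nengo.Ensemble(50, dimensions=1)
    # learning rule attached so the weights signal can be overwritten
    conn = nengo.Connection(a, b, learning_rule_type=nengo.PES())

with nengo.Simulator(net) as sim:
    sim.run(0.5)

    # zero all outgoing weights of neuron 0 in `a`, effectively removing it
    weights_sig = sim.model.sig[conn]["weights"]
    new_weights = np.array(sim.signals[weights_sig])
    new_weights[:, 0] = 0.0
    sim.signals[weights_sig] = new_weights

    sim.run(0.5)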

Hi @Eric,

Thanks for the reply! If I wanted to change the connection weights depending on an input value (probably from a Node using a slider in Nengo GUI), would that be possible? If so, could you give me a hint as to how it could be done?

You can do that either with a learning rule or with the hacky way I outlined. Here are examples of each.

Note that the example with the PES learning rule uses PES differently from the way we normally use it. We typically feed the error (i.e. the actual value minus the target value) into the PES learning rule so that it adjusts the weights to reduce that error. The way I’ve set things up here, the modulator node goes straight into the learning rule’s error input, so a higher value will increase the weights and a lower value will decrease them (or maybe vice versa, since PES tries to use gradient descent to minimize the error). Also note that PES involves multiplying the error by the encoders, so the weights on individual neurons will be adjusted differently depending on their encoders.

import matplotlib.pyplot as plt
import numpy as np

import nengo
from nengo.processes import WhiteSignal

n_neurons = 100
d = 1
tsim = 10

# --- adjust weights with PES learning rule
with nengo.Network(seed=0) as net:
    u = nengo.Node(WhiteSignal(period=tsim, high=0.5))
    modulator = nengo.Node(lambda t: -1 if t > 5 else 0)

    a = nengo.Ensemble(n_neurons, d)
    b = nengo.Ensemble(n_neurons, d)

    nengo.Connection(u, a, synapse=None)
    c = nengo.Connection(a, b, learning_rule_type=nengo.PES(learning_rate=1e-4))

    nengo.Connection(modulator, c.learning_rule)  # the modulator provides the PES error signal

    up = nengo.Probe(u, synapse=0.03)
    bp = nengo.Probe(b, synapse=0.03)

    weight_p = nengo.Probe(c, "weights")
    delta_p = nengo.Probe(c.learning_rule, "delta")

with nengo.Simulator(net, seed=1) as sim:
    sim.run(tsim)

t = sim.trange()

plt.figure()
plt.subplot(211)
plt.plot(t, sim.data[up])
plt.plot(t, sim.data[bp])

plt.subplot(212)
plt.plot(t, np.abs(sim.data[weight_p]).sum(axis=-1))
# plt.plot(t, np.abs(sim.data[delta_p]).sum(axis=-1))

# --- adjust weights with hacky node
# Single-element lists so that modulator_fn can reference the Simulator and
# Connection objects, which are only created later.
sim_reference = [None]
conn_reference = [None]

def modulator_fn(t):
    # After t = 5 s, nudge all of the connection weights up a little each
    # timestep by writing directly to the weights signal in the simulator.
    if t > 5:
        sim = sim_reference[0]
        conn = conn_reference[0]
        w = sim.signals[sim.model.sig[conn]["weights"]]
        sim.signals[sim.model.sig[conn]["weights"]] = w + sim.dt * 0.01

with nengo.Network(seed=0) as net:
    u = nengo.Node(WhiteSignal(period=tsim, high=0.5))
    modulator = nengo.Node(modulator_fn, size_out=0)

    a = nengo.Ensemble(n_neurons, d)
    b = nengo.Ensemble(n_neurons, d)

    nengo.Connection(u, a, synapse=None)
    # No error is connected to this learning rule; the weights are instead
    # overwritten directly by modulator_fn.
    c = nengo.Connection(a, b, learning_rule_type=nengo.PES(learning_rate=1e-4))
    conn_reference[0] = c

    up = nengo.Probe(u, synapse=0.03)
    bp = nengo.Probe(b, synapse=0.03)

    weight_p = nengo.Probe(c, "weights")

with nengo.Simulator(net, seed=1) as sim:
    sim_reference[0] = sim
    sim.run(tsim)

t = sim.trange()

plt.figure()
plt.subplot(211)
plt.plot(t, sim.data[up])
plt.plot(t, sim.data[bp])

plt.subplot(212)
plt.plot(t, np.abs(sim.data[weight_p]).sum(axis=-1))

plt.show()

Hi @Eric,
Thank you for the reply; it was very helpful.

1. If I wanted to start with the regular connection weights, set them to zero for some simulation steps, and then return them to normal to continue operation, would that be possible? I tried simply changing the modulator to

   modulator = nengo.Node(lambda t: -0.5 if t < 5 or t > 7 else 0)

   but this doesn’t seem to work.

2. I also don’t understand why the maximum value upon enabling the connection is 1.5 (I assume it’s some sort of saturation?).