Tuning an integrator for better stability


In the publication “Fine-Tuning and the Stability of Recurrent Neural Networks” by MacNeil and Eliasmith, PES is applied to the recurrent connection of an integrator to improve stability. If I understand correctly, “stability” here refers to how much the integrator drifts while holding a value. I was hoping to build a toy example of this in Nengo 2.0, but I suspect I am doing it wrong.

Given the network:

import nengo
import numpy as np

class Integrate(object):
    def __init__(self, dt=0.001):
        self.last_val = 0.
        self.dt = dt
    def step(self, t, x):
        self.last_val += x * self.dt
        return self.last_val

tau = 0.1
integ = Integrate()

model = nengo.Network()
with model:
    def stim_fn(t):
        # Piecewise input: step to 1 at t=0.2, back to 0 at t=0.6
        # (further segments from my draft left commented out)
        if 0.2 <= t < 0.6:
            return 1
        # if 2 <= t < 2.4:
        #     return -2
        # if 4 <= t < 4.4:
        #     return 1
        return 0

    input_sig = nengo.Node(stim_fn)
    ens = nengo.Ensemble(15, dimensions=1)
    integ_node = nengo.Node(integ.step, size_in=1)
    error = nengo.Ensemble(100, dimensions=1)
    error_node = nengo.Node(size_in=1)
    output = nengo.Node(size_in=1)
    nengo.Connection(input_sig, ens, synapse=tau)
    nengo.Connection(ens, ens, synapse=tau)
    nengo.Connection(input_sig, integ_node, synapse=tau)
    nengo.Connection(integ_node, error, transform=-1, synapse=tau)
    nengo.Connection(output, error, transform=1, synapse=None)
    nengo.Connection(integ_node, error_node, transform=-1, synapse=tau)
    nengo.Connection(output, error_node, transform=1, synapse=None)

    nengo.Connection(input_sig, ens, synapse=None)
    conn_out = nengo.Connection(ens, output,
                                function=lambda x: np.random.random(1),
                                learning_rule_type=nengo.PES())
    # Error connections don't impart current
    error_conn = nengo.Connection(error, conn_out.learning_rule)
    inhibit = nengo.Node([0])
    nengo.Connection(inhibit, error.neurons, transform=[[-10]] * error.n_neurons, synapse=0.01)

I was hoping that disabling learning would visibly increase the drift in the value represented by the integrator over time. However, the integrator holds its value well even without learning. Is there a better way to demonstrate learning on a recurrent connection?


Wait. The error signal in this model makes no sense. So my new question is: how exactly should the error signal be computed in this case?
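My current (hedged) understanding: the error should be the difference between the integrator's actual decoded value and the ideal integral (which the direct-mode `integ_node` already computes), and it should drive the recurrent connection's learning rule rather than an output connection. A pure-Python toy of that idea (all parameters are made up), where a PES-style update repairs a deliberately leaky recurrent weight:

```python
import numpy as np

def run(learn, lr=1.0, dt=0.001, T=2.0):
    """Simulate a scalar integrator x[k+1] = w*x[k] + u[k]*dt.

    The ideal recurrent weight is w = 1; we start at w = 0.995, so the
    integrator leaks and drifts toward zero while holding a value.  When
    `learn` is True, a PES-style rule nudges w using
    error = actual - ideal, analogous to feeding an error ensemble into
    the recurrent connection's learning rule.
    """
    w = 0.995            # imperfect recurrent weight (leaky)
    x = 0.0              # adapting integrator state
    x_ideal = 0.0        # reference (ideal) integrator state
    errs = []
    for k in range(int(T / dt)):
        t = k * dt
        u = 1.0 if 0.2 <= t < 0.6 else 0.0
        x_ideal += u * dt          # perfect reference integral
        x = w * x + u * dt         # leaky "neural" integrator
        e = x - x_ideal            # error signal: actual minus target
        if learn:
            w -= lr * e * x * dt   # PES-style update on the recurrent weight
        if t >= 0.6:               # measure drift while holding the value
            errs.append(abs(e))
    return float(np.mean(errs))
```

With learning off, the held value decays and the average drift is large; with learning on, the weight is pushed toward 1 and the drift shrinks. If this reading is right, then in my Nengo model the `error` population should connect to a learning rule on the `ens → ens` connection, not on `conn_out`.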