Training multiple connections at the same time

Hello. I am working on a small project and I am trying out a few things with nengo. I have read through this example here on how to train a connection. I tried to adapt it to train multiple serial connections (A->B->C), but I haven’t found a way to do it. I was wondering if there is a way to do this using only nengo.

I am aware of nengo_dl, but there seems to be a distinct lack of examples for it. If it is possible, I would like an example of how to use the training algorithms in nengo_dl.

Hi @Alperen, welcome to the forums! Conceptually, this kind of serial connection learning is possible, but the way to do it with Nengo is not the same as you would with a traditional ANN. The reason is that Nengo respects more biological constraints than traditional ANNs do.

The error-minimization rule described in the example you linked is called PES. In contrast to other learning rules that minimize error (notably backpropagation), PES imposes the following additional constraints (see the sketch after this list):

  1. The error signal must be computed by the network itself.
  2. The error signal must be explicitly projected to the connection that minimizes that error signal.
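To make these two constraints concrete, here is a minimal sketch of learning a communication channel online with PES, in the same spirit as the linked example. The ensemble sizes, stimulus, and default learning rate are just illustrative choices, nothing about them is special.

import nengo
import numpy as np

with nengo.Network() as model:
    # input signal; it also serves as the target for the communication channel
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))
    pre = nengo.Ensemble(60, dimensions=1)
    post = nengo.Ensemble(60, dimensions=1)
    nengo.Connection(stim, pre)

    # the connection whose decoders PES will adapt
    conn = nengo.Connection(pre, post, learning_rule_type=nengo.PES())

    # constraint 1: the error is computed by the network itself,
    # here by an ensemble representing (post - target)
    error = nengo.Ensemble(60, dimensions=1)
    nengo.Connection(post, error)
    nengo.Connection(stim, error, transform=-1)

    # constraint 2: the error signal is projected to the connection
    # being trained, via its learning rule object
    nengo.Connection(error, conn.learning_rule)

with nengo.Simulator(model) as sim:
    sim.run(10.0)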

In the Nengo examples, we go through how to build networks to compute the error signal for simple functions like communication channels and multiplication. If you wanted to faithfully reproduce backpropagation, then you would build a network with ensembles computing two error signals: the hidden-output error signal and the input-hidden error signal. Those error signals would be projected to the hidden-output and input-hidden connections, respectively.

If you were to do this, you would find that some quantities are difficult to calculate in a Nengo network because of other constraints. For example, in order to compute error signals for the purpose of backpropagation, you need to know the derivative of the neural activation function. This isn’t information that a neural network would generally be able to know (a neuron computes a function on its inputs, but doesn’t have meta-knowledge of what function that might be), though it can be approximated with several different types of networks (see Tripp & Eliasmith, 2010 and Aaron Voelker’s work).

Stepping back a bit, while it may seem like backpropagation is something that a neural simulator with AI applications cannot do without, the NEF and PES rule fill the same role as backpropagation in most cases. Typically you use backpropagation to learn some mapping between input-output pairs. In traditional neural networks, this is done by setting up one or more layers of “hidden” neurons between the input and output layer in order to capture the nonlinear interactions between neurons in the input layer, where those input layer neurons each represent one input signal.

In Nengo, we typically use distributed representations in which many neurons represent each input signal. If there is reason to believe that there will be nonlinear interactions between input signals, then we have neurons represent multiple signals (see the combining example). By doing this, we no longer need a layer of hidden neurons between the input and output layers; we can compute nonlinear functions in a single input-output connection (see the multiplication example).
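For instance, here is a condensed sketch of that idea (the ensemble sizes, radius, and input values are arbitrary illustrative choices): two input signals are represented together in one two-dimensional ensemble, and their product is computed directly on the output connection, with no hidden layer in between.

import nengo

with nengo.Network() as model:
    stim_a = nengo.Node(0.5)
    stim_b = nengo.Node(-0.3)

    # one ensemble represents both signals in a distributed 2D representation
    combined = nengo.Ensemble(200, dimensions=2, radius=1.5)
    nengo.Connection(stim_a, combined[0])
    nengo.Connection(stim_b, combined[1])

    # the nonlinear function is computed on the connection itself
    product = nengo.Ensemble(100, dimensions=1)
    nengo.Connection(combined, product, function=lambda x: x[0] * x[1])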

What if we don’t know the function mapping inputs to outputs? We can approximate it automatically by specifying the inputs and outputs in the connection itself:

nengo.Connection(pre, post, eval_points=inputs, function=outputs)
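For example, if all we have is a set of sampled input/output pairs, we can hand those arrays to the connection directly (the quadratic target below is just a stand-in for whatever mapping you actually have data for):

import numpy as np
import nengo

# sampled input/output pairs for the (otherwise unknown) mapping
inputs = np.random.uniform(-1, 1, size=(500, 1))
outputs = inputs ** 2

with nengo.Network() as model:
    pre = nengo.Ensemble(100, dimensions=1)
    post = nengo.Ensemble(100, dimensions=1)
    # the connection's decoders are solved for from the sampled pairs
    nengo.Connection(pre, post, eval_points=inputs, function=outputs)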

Or we can learn the mapping online by making Nengo objects to compute the error signal as the simulation is running (as in the communication channel example, or the product learning example).

To say all this in a slightly different way, minimizing error in order to compute functions is at the core of how Nengo works, and doesn’t require the multi-layer architecture that other ANNs use. In order to give better advice as to how to use Nengo to solve your problem, we would need to know more about the problem being solved. There are situations where many layers and complex training techniques yield far better results; these things have been done with Nengo, but setting them up has not yet been packaged into a simple example (see, for example, Hunsberger & Eliasmith, 2016).

Here’s an example using nengo_dl to train serial connections. I’m working on adding more examples of how to use sim.train to the documentation; they should be up in a week or so.

import nengo
import nengo_dl
import numpy as np
import tensorflow as tf

with nengo.Network(seed=0) as net:
    # these parameter settings aren't necessary, but they set things up in
    # a more standard machine learning way, for familiarity
    net.config[nengo.Ensemble].neuron_type = nengo.RectifiedLinear()
    net.config[nengo.Ensemble].gain = nengo.dists.Choice([1])
    net.config[nengo.Ensemble].bias = nengo.dists.Uniform(-1, 1)
    net.config[nengo.Connection].synapse = None

    # connect up our input node, and 3 ensembles in series
    a = nengo.Node([0.5])
    b = nengo.Ensemble(30, 1)
    c = nengo.Ensemble(30, 1)
    d = nengo.Ensemble(30, 1)
    nengo.Connection(a, b)
    nengo.Connection(b, c)
    nengo.Connection(c, d)

    # define our outputs with a probe on the last ensemble in the chain
    p = nengo.Probe(d)

n_steps = 5  # the number of simulation steps we want to run our model for
mini_size = 10  # minibatch size

with nengo_dl.Simulator(net, step_blocks=n_steps, minibatch_size=mini_size,
                        device="/cpu:0") as sim:
    # create input/target data. this could be whatever we want, but here we'll
    # train the network to output 2x its input
    input_data = np.random.uniform(-1, 1, size=(10000, n_steps, 1))
    target_data = input_data * 2

    # train the model, passing `input_data` to our input node `a` and
    # `target_data` to our output probe `p`. we can use whatever tensorflow
    # optimizer we want here.
    sim.train({a: input_data}, {p: target_data},
              tf.train.MomentumOptimizer(1e-2, 0.9), n_epochs=10)

    # run the model to see the results of the training. note that this will
    # use the input values specified in our `nengo.Node` definition above (0.5)
    sim.run_steps(n_steps)

    # so the output should be 1
    print(sim.data[p][0])

    # or if we wanted to see the performance on a test dataset, we could do
    test_data = np.random.uniform(-1, 1, size=(mini_size, n_steps, 1))
    sim.run_steps(n_steps, input_feeds={a: test_data})

    assert np.allclose(test_data * 2, sim.data[p][:, n_steps:], atol=1e-2)

There is now more detailed documentation on optimizing networks in NengoDL at https://nengo.github.io/nengo_dl/training.html