Training a NengoDL network inside a Nengo network

I have a nengo network which consists of a few neural circuits representing the pre-motor cortex, primary motor cortex, and an arm. The pre-motor cortex forms target angles for the arm joints and the motor cortex calculates the torques to apply. Now I want to integrate an artificial neural network that will be trained to predict arm movement given the output of the pre-motor cortex (PMC). I created a two-layer fully connected network as follows:

    hidden = nengo_dl.Layer(tf.keras.layers.Dense(units=128, activation=tf.nn.relu))(
        net.PMC.output
    )
    out = nengo_dl.Layer(tf.keras.layers.Dense(units=dim))(hidden)

    # add a probe to collect output
    out_p = nengo.Probe(out)

Now I want to provide many inputs to the PMC and backpropagate the error between the predicted arm position and the actual simulated arm position in order to optimize this network. I already have the code to run the simulation with various random arm targets. However, I don’t know how to optimize the network. I was thinking something like this:

    with nengo_dl.Simulator(net) as sim:
        sim.compile(
            optimizer=tf.optimizers.RMSprop(1e-4),
            loss={out_p: nengo_dl.losses.nan_mse},
        )
        sim.fit(train_inputs, train_targets, epochs=10)

using nengo_dl.losses.nan_mse. But I don't know what my train_inputs and train_targets would be. I wonder if I can get it to backpropagate the simulated error in real time and then save the weights, or if I should use a nengo.Probe to save the positions of the arm and the PMC outputs so that I can then train this network separately. If the second option makes more sense, I don't understand how to go about saving probe data, or how I would then train this part of the network separately from my simulation of the rest of the network. Thanks, and please let me know if I can clarify.

Hi @Luciano, and thanks for posting this question on the Nengo forums! :smiley:

Regarding your question, I think some more clarity about what you are trying to achieve will help us determine the best approach to take. From your description, I understand that you have 3 components to your system: the MC (primary motor cortex), the PMC (pre-motor cortex), and an arm, and they are connected up like so: MC -> PMC -> arm?

What you are trying to accomplish is to train the network such that if you feed a target location to the MC, the arm will move to that target location? My question is this: do you already know the outputs of the PMC that map onto the desired arm locations? If you already have this mapping, the train_inputs would be the output of the PMC, and train_targets would be the associated target arm positions.
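As a rough sketch of what the NengoDL simulator's fit function expects (all of the names and shape variables here are placeholders), the training data would be dictionaries of arrays with shape (n_examples, n_steps, dimensions), keyed by the input node and the output probe:

    import numpy as np

    # placeholder shapes: n_reaches training examples, n_steps time steps each
    train_inputs = {pmc_input: np.zeros((n_reaches, n_steps, pmc_dims))}
    train_targets = {out_p: np.zeros((n_reaches, n_steps, arm_dims))}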

Another question about your problem is whether or not your arm simulation simulates the time it takes for the arm to move to the target location. My suggestion above applies to the case where a specific PMC output can be mapped onto a specific arm position (or configuration). However, if you want to map the PMC output to a range of arm positions (i.e., the trajectory the arm has to take to reach the associated arm target), then a different approach to setting up the training signal is needed.

With respect to your post in general, were you also inquiring about how to train a NengoDL network within an existing Nengo network? You can accomplish this using NengoDL simulator’s freeze_params method. As an example, suppose your network was something like this:

with nengo.Network() as net:
    MC = nengo.Ensemble(...)
    PMC = nengo.Ensemble(...)    

    # PMC-to-arm network
    with nengo.Network() as pmc2arm:
        hidden = nengo_dl.Layer(...)(PMC)
        out = nengo_dl.Layer(...)(hidden)
        # add a probe to collect output
        out_p = nengo.Probe(out)

# Train pmc2arm network
with nengo_dl.Simulator(pmc2arm) as sim:
    sim.compile(optimizer=tf.optimizers.RMSprop(1e-4), 
                loss=nengo_dl.losses.nan_mse)
    sim.fit(train_inputs, train_targets, epochs=10)
    sim.freeze_params(pmc2arm)

# Run nengo model
with nengo.Simulator(net) as sim:
    sim.run(1)

Thanks for the reply. To clarify: the PMC outputs to the motor cortex, which in turn controls the arm. I am trying to create a TensorFlow network that will predict the movement of the arm (joint angles over time) given the PMC output, so this network will be emulating the motor cortex. This is the code I'm starting with. In this code there are targets that change every few seconds and a virtual arm moves towards them; I am visualizing it in the Nengo GUI. I'm not sure how to connect and train this network. I was thinking that if I record the PMC output and the position of the arm, I could do some sort of temporal binning and then train, but I'm not sure how to log these pieces of information. Your suggestion seems to be to train directly within Nengo, which would be easier, but I'm not sure how to properly obtain the train_inputs and train_targets, or how to run the optimization while a simulation is running, since my data is created during the simulation of various reaching events, which happens when I run the code.

Hi Luciano,

Sounds like an interesting project! Sorry to ask for clarification again, but I want to make sure we fully understand the goal. Given the current state of the arm and the target angles specified by the PMC, you want to make another population of neurons that can predict what the feedback from the arm will be at the next time step? That would effectively be learning an internal model of how the system responds to different target angles given different arm states. Is that correct?

Hi Travis,

Thanks for the response and for making your code available! Actually, I want to predict the movement of the arm from the output (target angles) of the PMC. If you're curious, my goal is to use your code to simulate the neural coprocessor as described here. This is the emulator network.

Right now, I'm leaning towards offline training: I can run one of your reaching simulations for a long time, probing and saving the output from the PMC and the arm position. Then I should be able to do some sort of time binning, train offline, and then load the parameters back into Nengo.
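Something like this is the plan I have in mind (just a sketch, with placeholder names for the emulator network and data):

    # sketch: train offline on the probed data, then save the parameters
    # so they can be loaded back into the Nengo model later
    with nengo_dl.Simulator(emulator_net) as sim:
        sim.compile(
            optimizer=tf.optimizers.RMSprop(1e-4),
            loss={out_p: nengo_dl.losses.nan_mse},
        )
        sim.fit(train_inputs, train_targets, epochs=10)
        sim.save_params("./emulator_params")  # reload later with sim.load_params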

Oh very neat!

If I'm understanding the system correctly from a quick scan, the idea is that there's been a severed connection between the parts of the brain generating the high-level command (say, PMC) and the parts of the brain actually carrying out the command (M1), and the NCP is learning, given PMC input, how to stimulate M1 to carry out the desired action. The emulator network is learning what activity in M1 maps to what movements of the arm and hand, which is then used to train the NCP. Is that correct?

This is a slightly different setup than what I understood from the previous messages, in that we'd be recording from M1 and learning arm movement, instead of recording output from the PMC and mapping it to arm movement.

You’re right, I made a mistake. I was trying to record from PMC but I realize now I should be recording from M1. Then the second network would be stimulating M1.

OK great. So then what you can do is set up a probe on the neurons of the M1 population and the arm movement:

    probe_m1_neurons = nengo.Probe(m1_ens.neurons)
    probe_arm_state = nengo.Probe(arm_state)

then save the results (sim.data[probe_m1_neurons] and sim.data[probe_arm_state]) to file, and you can use those as your train_inputs and train_targets, respectively. I don't think you'll need to do any time binning; you should be able to use the data recorded straight from the neurons. You could look at adding a synapse from the input to the first layer of the network, but I'd try the simplest approach first!
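For example, something like this (a sketch, assuming the probes above and that your model is called net):

    import numpy as np

    with nengo.Simulator(net) as sim:
        sim.run(60)  # simulate 60 seconds of reaching

    # save the probed data to file for offline training
    np.savez(
        "m1_training_data.npz",
        train_inputs=sim.data[probe_m1_neurons],  # shape: (n_steps, n_m1_neurons)
        train_targets=sim.data[probe_arm_state],  # shape: (n_steps, arm_state_dim)
    )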

Thanks, I was able to train the network offline. However, I'm having trouble integrating it into the Nengo network for real-time prediction. I'm using a KerasWrapper and TensorNode as described in the tutorial:

from tensorflow.keras.models import load_model

# load the emulator network
model = load_model('tensorflow/models/emulator_network')
EN = KerasWrapper(model)

# create a TensorNode and pass it the new layer
en_node = nengo_dl.TensorNode(
    EN,
    shape_in=(100, 1000),  # 100 time steps (100 ms) x 1000 M1 neurons
    shape_out=(dim*2 + 2,),  # output will be angle and velocity for each joint and (x,y) hand coordinates
    pass_time=False,  # this node doesn't require time as input
)

# connect up our input to our keras node
nengo.Connection(net.M1.M1.neurons, en_node, synapse=None)
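(For reference, KerasWrapper is the small tf.keras.layers.Layer subclass from the tutorial that wraps a Keras model so the TensorNode can call it; roughly this idea, though my sketch here may differ from the tutorial's exact code:)

    # minimal sketch of the wrapper idea; the tutorial's version has more
    # detail (e.g., rebuilding the model inside build())
    class KerasWrapper(tf.keras.layers.Layer):
        def __init__(self, keras_model):
            super().__init__()
            self.model = keras_model

        def call(self, inputs):
            # inputs: (minibatch, *shape_in) -> outputs: (minibatch, *shape_out)
            return self.model(inputs)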

I trained a recurrent neural network that takes a 100 ms window of M1 activity and predicts the arm position at the end of that window, but I don't know how to provide 100 ms of input to the TensorNode instead of just the current time step. I was hoping to have this network output to a second virtual arm so I can compare the real vs. predicted arm movements in real time. Later I also want to stimulate in real time using a second TensorFlow network, all within Nengo, but I'm not sure how/if this is possible. I can't do it all offline because I will be stimulating later and probably introducing plasticity too. Right now I get this error in the Nengo GUI:

  File "model_code/REACH/framework.py", line 67, in generate
    nengo.Connection(net.M1.M1.neurons, en_node, synapse=None)
  File "/Users/luciano/Library/Python/3.8/lib/python/site-packages/nengo/base.py", line 34, in __call__
    inst.__init__(*args, **kwargs)
  File "/Users/luciano/Library/Python/3.8/lib/python/site-packages/nengo/connection.py", line 498, in __init__
    self.transform = transform  # Must be set after function
  File "/Users/luciano/Library/Python/3.8/lib/python/site-packages/nengo/base.py", line 108, in __setattr__
    super().__setattr__(name, val)
  File "/Users/luciano/Library/Python/3.8/lib/python/site-packages/nengo/config.py", line 492, in __setattr__
    raise exc_info[1].with_traceback(None)
nengo.exceptions.ValidationError: init: Shape of initial value () does not match expected shape (100000, 1000)

Any idea how I can do this? Thanks!

Hi Luciano,

I don't think there should be any issues doing what you're describing. To pass a 100 ms window of M1 activity to the TensorNode, you can create a nengo.Node that is connected to the M1 neurons, stores 100 time steps of activity, and passes it to the TensorNode. Something like:

import numpy as np

# rolling buffer: 100 time steps x 1000 M1 neurons, stored on the network
net.window = np.zeros((100, 1000))

def window_node_func(t, x):
    net.window[:-1] = net.window[1:]  # shift the buffer back one time step
    net.window[-1] = x                # append the newest M1 activity
    return net.window.flatten()

window_node = nengo.Node(output=window_node_func, size_in=1000, size_out=100 * 1000)

# synapse=None keeps the activity unfiltered, matching the raw probe data
# the network was trained on
nengo.Connection(net.M1.M1.neurons, window_node, synapse=None)
nengo.Connection(window_node, en_node, synapse=None)

And you can definitely also have the output of en_node go to a second virtual arm; just copy the code for instantiating the first arm. Then you'll want to set the joint angles on that second arm directly based on the output from en_node, right? You can do that with another simple node, like:

def set_arm_angles_func(t, x):
    net.arm2.q = x  # directly set the second arm's joint angles

set_arm_angles = nengo.Node(output=set_arm_angles_func, size_in=3)

# slice out the joint angles (assuming they are the first entries of
# en_node's output) so the dimensions match
nengo.Connection(en_node[:3], set_arm_angles, synapse=None)

Note that net.arm2.q = x might not be the correct way to set the angles directly … but IIRC it's close.
