Integrating a Nengo model with a DL model


I converted a simple Keras model with
nengo_converter = nengo_dl.Converter(…)

Now I want to add a new Nengo network to the flow.
The steps are:

  1. Input is fed into the Keras network,
  2. the result of the Keras DL network will be fed into my pure Nengo network.

I’m not sure how to wire these two networks together: how to feed the nengo_converter with an input node, and how to connect the result (the prediction of the Keras model) to my Nengo network.

Thanks in advance

Hi @hadarcd, and welcome to the Nengo forums. :smiley:

If you want to insert an entire Keras model into a Nengo network, the most straightforward option would be to wrap it in a nengo_dl.TensorNode, as done in this example. A nengo_dl.TensorNode behaves like a regular nengo.Node, and you can use standard nengo.Connection objects to connect to and from it.

If you really want to use the NengoDL converter, it should (in theory) be possible to connect to it like you would a regular Nengo network. I should note that I haven’t implemented such a network myself, so I’d need to do some exploratory coding to see if that is indeed possible. I’ll keep you posted on this. :smiley:

The first DL network was converted to an SNN as in the Keras-to-SNN tutorial with nengo_dl.Converter(),
and I trained this network.
I want to use the predicted output of this trained network as the input signal for a Nengo spiking network.
Is it possible to connect them and run them together as an end-to-end flow?

Yes! You can! :smiley:

The Simplest Implementation
Since the .net attribute of the NengoDL converter object is a standard Nengo network, it can be used in the same way as any other Nengo network. The simplest implementation is to add your additional Nengo objects directly to the converter’s .net, like so:

# Apply the NengoDL converter on a Keras model
converter = nengo_dl.Converter(keras_model, ...)

# Use a `with` block on the converter's network to add Nengo objects to it
with converter.net:
    # Add a 1000-neuron, 10-dimensional ensemble to the converter network
    ens1 = nengo.Ensemble(1000, 10)

Connecting to layers within the original converter network is straightforward as well. Using the Keras-to-SNN tutorial network as an example, if you wanted to connect its dense layer to the ens1 ensemble created above, you would do this:

with converter.net:
    nengo.Connection(converter.layers[dense], ens1)

One other thing you’ll need to do is modify the Nengo node that NengoDL automatically creates for the input layer. When the converter is run, any tf.keras.layers.Input layers get converted to nengo.Node objects with a fixed output of zeros. When you run the network using the NengoDL simulator, you can override this default output with the data parameter of the simulator’s run functions:

with nengo_dl.Simulator(converter.net) as sim:
    sim.run(..., data={converter.layers[inp]: sim_data})

This feature is only available in NengoDL, so for a standard Nengo simulator, we have to modify the output of the input node before creating the simulator object. In the code snippet below, the PresentInput process is used to present each element of sim_data for presentation_time seconds.

    # Use the PresentInput process to present each element in `sim_data` for `dt` time.
    converter.layers[inp].output = nengo.processes.PresentInput(sim_data, presentation_time=dt)

with nengo.Simulator(converter.net, dt=dt) as sim:

Note: For the Keras-to-SNN tutorial network, the input is an MNIST digit, so here’s what the PresentInput process would look like to present one MNIST digit from the test_images set every 0.3s:

nengo.processes.PresentInput(test_images, presentation_time=0.3)

And that’s basically it! With all of this information, you should be able to modify the network and add in your spiking Nengo network, and connect them together.

Working with Saved Model Data
The Keras-to-SNN tutorial uses the sim.save_params and sim.load_params functions to save and load model data to ensure that the weight changes made during the training process are actually being used when the network is being simulated. While I was experimenting with the code to write up this post, I ran into some potential issues, so I’ll discuss them here.

  1. Because the tutorial uses the NengoDL simulator to do the training for the Keras model, you need to use the NengoDL simulator to load up the saved model parameters. However, because the model parameters are only contained within the NengoDL simulator context block, you need to use the freeze_params method to transfer the loaded weights back to the converter network. The full code to do this is:
# The training code, for context
if do_train:
    with nengo_dl.Simulator(converter.net, ...) as sim:
        sim.fit(...)
        sim.save_params(...)

# Use the NengoDL converter on the Keras model
converter = nengo_dl.Converter(keras_model, ...)

# The converter network is a blank slate at this point, so
# create a NengoDL simulator to load up the saved parameters
with nengo_dl.Simulator(converter.net) as sim:
    # Load the saved parameters into the `sim` simulator object
    sim.load_params(...)
    # The loaded parameters exist only in this NengoDL simulator instance,
    # so we need to transfer them back to the network with
    # the `freeze_params` method
    sim.freeze_params(converter.net)

# Now we can do whatever we want with the network
with converter.net:
    # Do stuff
  2. The sim.load_params function only works if the network has the exact same structure as it did when save_params was called. In the example code above, we added a new ensemble (ens1) to the converter network. If you are working with saved model parameters, this has to be done after the model data is loaded:
# This will NOT work: the network structure is modified before
# the saved parameters are loaded
converter = nengo_dl.Converter(keras_model, ...)

with converter.net:
    ens1 = nengo.Ensemble(100, 10)

with nengo_dl.Simulator(converter.net) as sim:
    sim.load_params(...)  # this will fail

# Do this instead: load and freeze the parameters first
converter = nengo_dl.Converter(keras_model, ...)

with nengo_dl.Simulator(converter.net) as sim:
    sim.load_params(...)
    sim.freeze_params(converter.net)

# Make custom modifications to the converter network after the `freeze_params` call
with converter.net:
    ens1 = nengo.Ensemble(100, 10)

More Complex Implementations
There are some caveats to the method I described above for integrating a NengoDL converter network with a regular Nengo network.

  • The regular Nengo network is integrated with the converter network by modifying the converter network directly. This is not always desirable if you have multiple subnetworks working together.
  • The example code I provided above does not go into detail about how to connect to the converter network. In the Keras-to-SNN tutorial (and in your use case), the converter network is the “head” of the chain, meaning nothing connects to it, and changing the input node’s output function is sufficient. However, there will be instances where the converter network needs to be inserted in between multiple Nengo networks.

In both of these cases, more work needs to be done to perform the integration, but it is possible to do so.

For the sake of completeness, here is some example code that you can play with:


Thank you for the detailed explanation.

I have another question regarding the nengo_dl.Converter:

Is it possible to run the network on inputs of different sizes?
For instance, I use this shape for the network input:
inp = tf.keras.Input(shape=(70, 100))
I want to make a prediction on a new image with different dimensions.
In Keras, I can define tf.keras.Input(shape=(None, None)), but I wasn’t able to convert it with nengo_dl.Converter.

If I understand Keras correctly, you can use dynamic input shapes in the Keras model definition, but they get set when you train, fit or evaluate the model. As such (and I might be wrong on this), if you have trained a Keras model on one input shape, even if you defined the input as tf.keras.Input(shape=(None, None)), you cannot then use a differently shaped input to evaluate the trained Keras model.

The NengoDL converter is similar to the training / fitting / evaluation operations for a Keras model; i.e., it needs static (not dynamic) input and output shapes to function properly. It converts Keras models into “physical” Nengo models, and unlike Keras, Nengo objects need statically defined input and output sizes when they are created, since Nengo works on the premise that all input and output shapes are defined with the network.