Yes! You can!
## The Simplest Implementation
Since the `.net` attribute of the NengoDL converter object is a standard Nengo network, it can be used the same way as other Nengo networks. The simplest implementation would be to add your additional Nengo objects directly to the converter's `.net`, like so:
```python
# Apply the NengoDL converter on a Keras model
converter = nengo_dl.Converter(keras_model, ...)

# Use a `with` block with the converter.net object to add Nengo objects to that network
with converter.net:
    # Add a 1000-neuron, 10-dimensional ensemble to the converter's Nengo network
    ens1 = nengo.Ensemble(1000, 10)
```
Connecting to layers within the original converter network is straightforward as well. Using the Keras-to-SNN tutorial's converter network as an example, if you wanted to connect the `dense` layer of `converter.net` to the `ens1` ensemble created above, you would do this:
```python
with converter.net:
    nengo.Connection(converter.layers[dense], ens1)
```
One other thing you'll need to do is modify the Nengo node that NengoDL automatically creates for the `input` layer. When the converter is run, any `tf.keras.layers.Input` layers get converted to `nengo.Node`s with a fixed output of zeros. When you run the network using the NengoDL simulator, you can override this default output by using the `data` parameter of the `sim.run` function:
```python
with nengo_dl.Simulator(converter.net) as sim:
    sim.run(1, data={inp: sim_data})
```
This feature is only available in NengoDL, so for a standard Nengo simulator, we have to modify the output of the `input` node before creating the simulator object. In the code snippet below, the `PresentInput` process is used to present each element of `sim_data` for `presentation_time` seconds.
```python
with converter.net:
    # Use the PresentInput process to present each element in `sim_data`
    # for `dt` seconds.
    converter.layers[inp].output = nengo.processes.PresentInput(
        sim_data, presentation_time=dt
    )

with nengo.Simulator(converter.net, dt=dt) as sim:
    sim.run(1)
```
Note: For the Keras-to-SNN tutorial network, the input is an MNIST digit, so here's what the `PresentInput` process would look like to present one MNIST digit from the `test_images` set every 0.3s:
```python
nengo.processes.PresentInput(test_images, presentation_time=0.3)
```
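To build intuition for how `PresentInput` steps through its inputs over time, here is a minimal pure-Python sketch of the time-to-index mapping (a simplified illustration of the behavior, not Nengo's actual implementation):

```python
def present_input_index(t, presentation_time, n_inputs):
    """Return which input element is presented at simulation time t.

    Mirrors the observable behavior of nengo.processes.PresentInput:
    each element is shown for `presentation_time` seconds, wrapping
    around to the first element after the last one.
    (A simplified sketch, not Nengo's actual implementation.)
    """
    return int(t / presentation_time) % n_inputs

# With presentation_time=0.3 and 10 test images:
print(present_input_index(0.0, 0.3, 10))   # 0 (first image)
print(present_input_index(0.29, 0.3, 10))  # 0 (still the first image)
print(present_input_index(0.3, 0.3, 10))   # 1 (second image)
print(present_input_index(3.05, 0.3, 10))  # 0 (wrapped back around)
```

So with `presentation_time=0.3`, running the simulation for 3 seconds would cycle through 10 images.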
And that’s basically it! With all of this information, you should be able to modify the `converter.net` network, add in your spiking Nengo network, and connect them together.
## Working with Saved Model Data
The Keras-to-SNN tutorial uses the `sim.save_params` and `sim.load_params` functions to save and load model data, ensuring that the weight changes made during training are actually used when the network is simulated. While I was experimenting with the code to write up this post, I ran into some potential issues, so I'll discuss them here.
- Because the tutorial uses the NengoDL simulator to do the training for the Keras model, you need to use the NengoDL simulator to load the saved model parameters. However, because the loaded parameters exist only within the NengoDL simulator context block, you need to use the `freeze_params` method to transfer the loaded weights back to the converter network. The full code to do this is:
```python
# The training code, for context
if do_train:
    with nengo_dl.Simulator(...) as sim:
        sim.compile(...)
        sim.fit(...)
        sim.save_params("./keras_to_snn_params")

# Use the NengoDL converter on the Keras model
converter = nengo_dl.Converter(keras_model, ...)

# The converter network is a blank slate at this point, so
# create a NengoDL simulator to load up the saved parameters
with nengo_dl.Simulator(converter.net) as sim:
    # Load the saved parameters into the `sim` simulator object
    sim.load_params("./keras_to_snn_params")

    # The loaded parameters exist only in this NengoDL simulator instance,
    # so we need to transfer them back to the converter.net network with
    # the `freeze_params` method
    sim.freeze_params(converter.net)

# Now we can do whatever we want with the converter.net network
with converter.net:
    # Do stuff
    ...
```
- The `sim.load_params` function only works if the network has the exact same structure as the network used when calling `save_params`. In the example code above, we added a new ensemble (`ens1`) to the converter network. If you are working with saved model parameters, this has to be done after the model data is loaded:
```python
# ## THIS WILL NOT WORK ##
converter = nengo_dl.Converter(keras_model, ...)

with converter.net:
    ens1 = nengo.Ensemble(100, 10)

with nengo_dl.Simulator(converter.net) as sim:
    sim.load_params("./keras_to_snn_params")
    sim.freeze_params(converter.net)

# ## DO THIS INSTEAD ##
converter = nengo_dl.Converter(keras_model, ...)

with nengo_dl.Simulator(converter.net) as sim:
    sim.load_params("./keras_to_snn_params")
    sim.freeze_params(converter.net)

# Make custom modifications to the converter network after the `freeze_params` call
with converter.net:
    ens1 = nengo.Ensemble(100, 10)
```
## More Complex Implementations
There are some caveats to the method I described above for integrating a NengoDL converter network with a regular Nengo network.
- The regular Nengo network is integrated with the converter network by modifying the converter network directly. This is not always desirable if you have multiple subnetworks working together.
- The example code I provided above does not go into detail about how to connect *to* the converter network. In the Keras-to-SNN tutorial (and in your use case), the converter network is the "head" of the chain, meaning nothing connects to it, and changing the input node's `output` function is sufficient. However, there will be instances where the converter network needs to be inserted in between multiple Nengo networks.
In both of these cases, more work needs to be done to perform the integration, but it is possible to do so.
For the sake of completeness for this post, here is some example code that you can play with: