Layer X is ambiguous because it has multiple output tensors in Keras sequential model

Hello,
I am trying to load an existing Keras model from a file, convert it into a spiking Nengo model with the NengoDL Converter class, and then train and evaluate it on the MNIST dataset.

Everything works fine if the loaded model was created with the Keras functional API. However, when the model is a Sequential model, I get an exception when I try to create a probe on the converted network.
Here is the relevant part of my code:

import nengo_dl
import tensorflow as tf

model = tf.keras.models.load_model(model_path)
# convert the Keras model to a Nengo network
converter = nengo_dl.Converter(model)
with converter.net:
    output_p = converter.outputs[model.get_layer(name='out_layer')]

Here is the raised error:
KeyError: 'Layer <tensorflow.python.keras.layers.core.Activation object at 0x7f77c8ab0c18> is ambiguous because it has multiple output tensors; use a specific tensor as key instead'

I tried using output_p = converter.outputs[model.get_layer(name='out_layer').output] to specify a tensor, but that raised another error:
KeyError: <Reference wrapping <tf.Tensor 'out_layer_9/Identity:0' shape=(None, 10) dtype=float32>>

How can I get a probe on a layer from a sequential model?

Hi kalivoda,

The problem is that when you pass your model to nengo_dl.Converter(), it converts the Sequential model into a functional model. From that point on, you need to reference the converted model, not the original.
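You can see this for yourself: the converter exposes the re-traced functional model as converter.model (the same attribute used below). A quick check:

# the original model is Sequential...
print(type(model))
# ...but the converter builds a functional re-trace, and the keys into
# converter.inputs/outputs come from this model, not the original
print(type(converter.model))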

If you just want to reference the output layer, it is a little simpler, because the output layer is automatically given a probe. You can get that probe with:

output_probe = converter.outputs[converter.model.output]
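Going back to your original attempt, the same idea should also work with a layer-name lookup, as long as you take the layer's last output tensor rather than .output (which is the first one); 'out_layer' here is just the name from your snippet:

output_probe = converter.outputs[model.get_layer(name='out_layer').get_output_at(-1)]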

If you want to probe a different layer, you can use your original model's layers as keys into converter.layers:

layer_probe = nengo.Probe(converter.layers[model.layers[YOUR_LAYER_NUM].get_output_at(YOUR_TENSOR_NUMBER)])

The converter uses the same layer objects as the original model, so each layer's list of output tensors will double. The ones you want to access are always at the end of the tensor list, so if your original layer had only one output tensor, you can reference the [-1] entry when getting your probe.
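For instance, taking the second layer of the model in the script below (assuming it had a single output tensor before conversion):

layer = model.layers[1]
# after conversion the layer has two output tensors: one from the original
# Sequential model and one from the converter's functional re-trace
print(len(layer.inbound_nodes))  # 2
print(layer.get_output_at(0))    # tensor from the original model
print(layer.get_output_at(-1))   # tensor from the converted model -- use this as the key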

I’ve added a simple script showing how to access the output probe, and a probe on the second layer.

import nengo
import nengo_dl
import tensorflow as tf
# create your keras model
model = tf.keras.Sequential(
    [
        tf.keras.layers.Dense(units=64, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(units=64, activation="relu"),
        tf.keras.layers.Dense(units=10, activation="relu"),
    ]
)
# convert to nengo_dl and swap activations to spiking ones
converter = nengo_dl.Converter(model, swap_activations={tf.nn.relu: nengo.SpikingRectifiedLinear()})

with converter.net:
    # probe on our output layer
    probe_output = converter.outputs[converter.model.output]
    # probe on our second layer
    probe_layer = nengo.Probe(converter.layers[model.layers[1].get_output_at(-1)])

with nengo.Simulator(converter.net) as sim:
    # run our simulation for 1 sec
    sim.run(1)
    # print our probe data
    print(sim.data[probe_output])
    print(sim.data[probe_layer])

One final note: you mention you want to run a spiking model. To do this, you will have to swap your activation functions for spiking versions, which I did in the example above. There are a few more steps you may need to take to get the same performance. We recently updated our examples for this exact purpose, so I'll point you there for more details:

https://www.nengo.ai/nengo-dl/examples/keras-to-snn.html
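In the meantime, here is a rough sketch of how training on MNIST with nengo_dl.Simulator could look. Treat the data reshaping, minibatch size, and loss choice as assumptions on my part rather than a recipe; as the linked example shows, you would typically train the non-spiking (rate) version of the network and only swap in spiking activations for inference:

import nengo_dl
import numpy as np
import tensorflow as tf

(train_images, train_labels), _ = tf.keras.datasets.mnist.load_data()
# nengo_dl expects (batch, n_steps, features); one timestep is enough for training
train_images = train_images.reshape((-1, 1, 784)).astype(np.float32) / 255.0
train_labels = train_labels.reshape((-1, 1, 1))

# convert without swap_activations so we train the rate version
train_converter = nengo_dl.Converter(model)
net_input = train_converter.inputs[train_converter.model.input]
net_output = train_converter.outputs[train_converter.model.output]

with nengo_dl.Simulator(train_converter.net, minibatch_size=200) as sim:
    sim.compile(
        optimizer=tf.optimizers.Adam(0.001),
        # assumes the output layer produces class scores for the 10 digits
        loss={net_output: tf.losses.SparseCategoricalCrossentropy(from_logits=True)},
    )
    sim.fit({net_input: train_images}, {net_output: train_labels}, epochs=2)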