Hi @arvoelke,
Thank you very much for the detailed explanation. After adding the code you attached above, it appears that nengo_dl.Converter does not replace the original ReLU activation functions with the spiking ReLU specified by the swap_activations argument. Below is the code segment used for the conversion; model is a Keras model defined with tf.nn.relu activation functions.
nengo_converter = nengo_dl.Converter(
    model, allow_fallback=False,
    swap_activations={tf.nn.relu: nengo.SpikingRectifiedLinear()},
)
assert nengo_converter.verify()
net = nengo_converter.net
for ensemble in net.ensembles:
    print(ensemble, ensemble.neuron_type)
And the output is:
<Ensemble "conv2d.0"> RectifiedLinear()
<Ensemble "conv2d_1.0"> RectifiedLinear()
<Ensemble "conv2d_2.0"> RectifiedLinear()
<Ensemble "conv2d_3.0"> RectifiedLinear()
<Ensemble "conv2d_4.0"> RectifiedLinear()
<Ensemble "conv2d_5.0"> RectifiedLinear()
<Ensemble "conv2d_6.0"> RectifiedLinear()
<Ensemble "conv2d_7.0"> RectifiedLinear()
<Ensemble "conv2d_8.0"> RectifiedLinear()
<Ensemble "conv2d_9.0"> RectifiedLinear()
<Ensemble "dense.0"> RectifiedLinear()
I was wondering whether this is the reason the Loihi simulator does not work. Could you let me know how I can work around this issue? For reference, I have included below a rough sketch of the workarounds I was considering. Thank you very much.
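In case it helps, here is a rough sketch of the two things I was thinking of trying, assuming swap_activations also accepts Nengo neuron types as keys, and that an ensemble's neuron_type can be reassigned after conversion (I have not verified either of these yet):

import nengo
import nengo_dl

# Option 1 (assumption: swap_activations can be keyed on the Nengo neuron type
# that the converter assigned, i.e. RectifiedLinear, instead of tf.nn.relu)
nengo_converter = nengo_dl.Converter(
    model,  # the same Keras model as above
    allow_fallback=False,
    swap_activations={nengo.RectifiedLinear(): nengo.SpikingRectifiedLinear()},
)
net = nengo_converter.net

# Option 2 (assumption: neuron types can be overwritten directly on the
# converted ensembles before building the simulator)
for ensemble in net.ensembles:
    ensemble.neuron_type = nengo.SpikingRectifiedLinear()

# Either way, the ensembles should then report SpikingRectifiedLinear()
for ensemble in net.ensembles:
    print(ensemble, ensemble.neuron_type)

Please let me know if either of these is the intended way to do this, or if I am misunderstanding how swap_activations is supposed to work.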
Will