Hello everyone,
I am learning to execute SNNs on Loihi chips and have been able to run the tutorial "Converting a Keras model to an SNN on Loihi" on the Nahuku32 board. I have two questions.
1> When we train the converted NengoDL network with `LoihiSpikingRectifiedLinear()` neurons, as done below (in cell 12):

```python
train(
    params_file="./keras_to_loihi_loihineuron_params",
    epochs=2,
    swap_activations={tf.nn.relu: nengo_loihi.neurons.LoihiSpikingRectifiedLinear()},
    scale_firing_rates=100,
)
```
how does the training proceed? Since the neurons are spiking, does NengoDL use surrogate gradient descent (or some other training method), or does it substitute a "rate" version of `LoihiSpikingRectifiedLinear()` to train the network? I do know that when we train a NengoDL network with `nengo.LIF()` neurons, it swaps `nengo.LIF()` for `nengo.LIFRate()` during training (to approximate the dynamics of the LIF neurons).
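For context, here is a minimal NumPy sketch of what I believe happens, namely that training uses the smooth steady-state firing-rate curve of the rectified-linear neuron, and that `scale_firing_rates` scales the input up while scaling the output amplitude down so the function the network computes is unchanged. This is my own illustration of the idea, not NengoDL's actual implementation:

```python
import numpy as np

def relu_rate(x):
    """Rate approximation of a spiking rectified-linear neuron:
    the steady-state firing rate equals the rectified input."""
    return np.maximum(x, 0.0)

def scaled_forward(x, scale=100.0):
    """Sketch of my understanding of scale_firing_rates: multiply
    the input by `scale` so neurons fire faster (better spiking
    approximation), and divide the output amplitude by `scale`
    so the overall computed function stays the same."""
    return relu_rate(x * scale) / scale

x = np.linspace(-1.0, 1.0, 5)
print(relu_rate(x))                                   # rates used during training
print(np.allclose(scaled_forward(x), relu_rate(x)))   # scaling preserves outputs
```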
2> I changed the network to have a MaxPool layer between two of the Conv layers. Architecture: Input -> Conv (spike generator) -> Conv -> MaxPool -> Conv -> Dense -> Dense. After making the necessary configuration changes, when I run my network with `nengo_loihi.Simulator()` with `target` set to "sim", I get the following error:
```
----> 1 with nengo_loihi.Simulator(net_2, target="sim") as loihi_sim:
...
BuildError: Conv2D transforms not supported for off-chip to on-chip connections where `pre` is not a Neurons object.
```
which I believe arises because the `pre` object is a Node (i.e., the MaxPooling layer). I was under the assumption that the MaxPooling TensorNodes (or Nodes, here in the case of NengoLoihi) would be executed on the GPU, and that NengoLoihi would then take care of mapping the Conv transform back onto the chips/board. Any ideas on how to fix this? Or is it not supported at all in NengoLoihi?
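For reference, this is the kind of computation I expect the off-chip MaxPooling Node to perform. It is a plain NumPy sketch of 2x2 max pooling with stride 2; the function name and array layout are my own for illustration, not NengoLoihi's API:

```python
import numpy as np

def max_pool_2x2(x):
    """2x2 max pooling with stride 2 over a (H, W, C) array,
    i.e. the computation the off-chip MaxPooling node would
    apply to the activations it receives from the chip."""
    h, w, c = x.shape
    # truncate odd dimensions, as "valid" pooling does
    x = x[: h // 2 * 2, : w // 2 * 2, :]
    # group each non-overlapping 2x2 window, then take its max
    x = x.reshape(h // 2, 2, w // 2, 2, c)
    return x.max(axis=(1, 3))

x = np.arange(16, dtype=float).reshape(4, 4, 1)
print(max_pool_2x2(x)[..., 0])
# each output entry is the max of one 2x2 window of the input
```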
Please let me know!