Hello @CodeHelp1, you are on the right path. Once you have trained your model, i.e. with `model.fit(...)` or `model.fit_generator(...)`, you can simply convert it as follows:
```python
# assuming `nengo`, `nengo_dl`, and `tensorflow as tf` are imported,
# and `model` is your trained Keras model
sfr = 20
ndl_model = nengo_dl.Converter(
    model,
    swap_activations={
        tf.keras.activations.relu: nengo.SpikingRectifiedLinear()},
    scale_firing_rates=sfr,
    synapse=0.005,
    inference_only=True)
```
and then create the test data accordingly, i.e. tile each test sample for `n_steps` timesteps (the presentation time to the spiking Nengo-DL network); a minimal sketch of this is below. This article might give you more context on this approach.
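Just to illustrate the tiling step, here's a minimal sketch (assuming flattened test images of shape `(n_test, n_pixels)` and a chosen `n_steps`; the variable names are just placeholders):

```python
import numpy as np

n_steps = 30  # presentation time (in timesteps) for each test sample; an assumed value

# add a time axis and repeat each flattened test image for n_steps timesteps,
# giving shape (n_test, n_steps, n_pixels) as the Nengo-DL simulator expects
tiled_test_images = np.tile(test_images[:n_test, None, :], (1, n_steps, 1))
```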
There’s another approach too. After creating a TF model, i.e.

```python
model = tf.keras.Model(inputs=input, outputs=dense1)
```

you can simply convert it and train it in the Nengo-DL context (instead of in TF, which is your current approach) as follows:
```python
converter = nengo_dl.Converter(
    model,
    swap_activations={tf.nn.relu: nengo.RectifiedLinear()},
)

net = converter.net
nengo_input = converter.inputs[input]
nengo_output = converter.outputs[dense1]

# run training
with nengo_dl.Simulator(net, seed=0) as sim:
    sim.compile(
        optimizer=tf.optimizers.RMSprop(0.001),
        loss={nengo_output: tf.losses.SparseCategoricalCrossentropy(from_logits=True)},
    )
    sim.fit(train_images, {nengo_output: train_labels}, epochs=10)

    # save the parameters to file
    sim.save_params("mnist_params")
```
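One thing to keep in mind here: `sim.fit` and `sim.predict` operate on arrays with a time axis, i.e. `(batch_size, n_steps, n_features)`. A rough sketch of the kind of reshaping you'd do on flattened MNIST-style data before calling `sim.fit` (exact shapes depend on your data, so treat this as an assumption):

```python
# add a time axis of length 1 for the (non-spiking, rate-based) training phase
train_images = train_images.reshape((train_images.shape[0], 1, -1))
train_labels = train_labels.reshape((train_labels.shape[0], 1, -1))
```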
Note that the above training is still done with rate neurons (`nengo.RectifiedLinear()`) and not with spiking neurons (`nengo.SpikingRectifiedLinear()`). Once done with training (and, in this example, saving the weights, which you don't necessarily need to do unless you are looking for some flexibility), you can once again proceed with inference, with proper test data arguments, as follows:
```python
converter = nengo_dl.Converter(
    model,
    swap_activations={tf.nn.relu: nengo.SpikingRectifiedLinear()},
)

net = converter.net
nengo_input = converter.inputs[input]
nengo_output = converter.outputs[dense1]

with nengo_dl.Simulator(net) as sim:
    sim.load_params("mnist_params")
    data = sim.predict({nengo_input: test_images[:n_test]})
```
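And just to close the loop, here's a rough sketch of how you might read the predictions out of `data` (assuming the test data was tiled for `n_steps` as above, the output layer produces class logits, and `test_labels` holds the original integer labels; all names here are illustrative):

```python
import numpy as np

# data[nengo_output] has shape (n_test, n_steps, n_classes); the filtered output
# at the last timestep is usually the most reliable, so take the argmax there
predictions = np.argmax(data[nengo_output][:, -1, :], axis=-1)
accuracy = np.mean(predictions == test_labels[:n_test])
```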
BTW, `model.fit(...)` can also work with data generators. All the above code snippets are from the articles I have linked, to give you the exact context of the next steps, and you might need to write some extra code for data handling. Also note that both articles use the Functional API of TF to create the model, not the Sequential API which you have used. I don't think this should cause any issue, but in case it does, switch to the Functional API (a rough sketch is below) and let us know if you still face problems in model conversion or execution.
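In case it helps, here's a minimal sketch of an MNIST-style classifier written with the Functional API; the layer sizes are placeholders (not your architecture), but it defines the `input` and `dense1` handles used by the converter snippets above:

```python
import tensorflow as tf

# Functional API equivalent of a simple Sequential classifier (sizes are placeholders)
input = tf.keras.Input(shape=(28 * 28,))
hidden = tf.keras.layers.Dense(128, activation=tf.nn.relu)(input)
dense1 = tf.keras.layers.Dense(10)(hidden)  # logits; no softmax since from_logits=True is used above

model = tf.keras.Model(inputs=input, outputs=dense1)
```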