I am currently stuck on passing training data in custom batches to the Nengo-DL Simulator. I actually arrived here from the following related problem: I was under the impression that I could train the network in TF-Keras, save the intermediate weights, load them again (in the same TF-Keras environment) into a
model, then use nengo_dl.Converter() to convert the trained
model to a spiking model, and finally run inference in the simulator. But it doesn't seem to work that way. Can someone please shed some light on this? From the tutorial, it appears that nengo-dl expects the weights to be obtained by training a spiking model in its own environment (not in TF-Keras).
Therefore, I converted the TF-Keras model architecture and attempted to train it with nengo-dl (as done in the tutorials). Please note that I cannot use the
minibatch_size option of the Simulator, because my training dataset is too large to be loaded into memory all at once; hence my custom batches. The API doc mentions the option of passing a custom generator (of training data and labels) as
x, but I could not find any example of how to do it. Can someone please show me how to code it?
For reference, this is what my generator looks like (note: I had to use `shape[0]` rather than `shape`, since `shape` is a tuple):

```python
def get_batches():
    for start in range(0, X_train_resh.shape[0], 128):
        end = min(start + 128, X_train_resh.shape[0])
        yield (X_train_resh[start:end], targets_train_resh[start:end])
```
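In case it helps, here is a self-contained sketch of the same batching logic, with small dummy NumPy arrays standing in for my real (much larger) `X_train_resh` and `targets_train_resh`, plus a quick sanity check that the batches cover the whole dataset:

```python
import numpy as np

# Dummy stand-ins for my real training arrays (300 samples here;
# the real dataset is far too large to hold in memory at once).
X_train_resh = np.random.rand(300, 10).astype(np.float32)
targets_train_resh = np.random.rand(300, 1).astype(np.float32)

def get_batches(batch_size=128):
    """Yield (inputs, targets) chunks of at most batch_size samples."""
    n = X_train_resh.shape[0]
    for start in range(0, n, batch_size):
        end = min(start + batch_size, n)
        yield (X_train_resh[start:end], targets_train_resh[start:end])

# Sanity check: batch sizes should sum to the number of samples.
sizes = [x.shape[0] for x, _ in get_batches()]
print(sizes)  # [128, 128, 44]
```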
Please let me know if you need my model training code (in nengo-dl). Thanks!