[Nengo-DL]: Adding a dynamic custom layer between two Ensembles

The input to a Nengo Node is always a 1-dimensional (flat) array, so whatever function you implement for that node will need to handle that flat array. To see how the input mapping works, you can call `.flatten()` on an example multi-dimensional array and inspect the result. Going the other way, you can use `np.reshape` to reshape the flat array back into a multi-dimensional array, and then do the max-pooling operation on that. This Stack Overflow thread discusses some approaches to implementing the max-pooling operation with NumPy.
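As a minimal sketch of that reshape-then-pool idea (the 4×4 input shape and 2×2 pool size here are just illustrative, not anything from your model), a Node function can reshape its flat input, max-pool with plain NumPy, and flatten the result again:

```python
import numpy as np

# Illustrative values only -- substitute your own layer dimensions
IMG_SHAPE = (4, 4)
POOL = 2


def max_pool_func(t, x):
    # A Nengo Node function receives (t, x), where x is a flat 1-D array;
    # reshape it back into its multi-dimensional form first
    img = np.reshape(x, IMG_SHAPE)
    h, w = IMG_SHAPE
    # Split each spatial axis into (blocks, pool) and take the max per block
    pooled = img.reshape(h // POOL, POOL, w // POOL, POOL).max(axis=(1, 3))
    # The Node's output must be flat as well
    return pooled.ravel()
```

You would then hook this up with something like `nengo.Node(max_pool_func, size_in=16)` (the `size_in` value just matching the illustrative 4×4 shape above).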

The connections in the connection list are ordered by order of creation, and from my quick glance at the code, it seems to respect that.

I'd have to double-check the code, but if I recall correctly, I believe this is the case. If your ndl_model is a NengoDL Converter object, it contains references to the Nengo equivalents of each Keras layer (i.e., through .layers), as well as direct references to the Nengo model objects (i.e., through .net.ensembles).

For NengoDL-converted networks, I wouldn't use the decoded_output of any of the ensembles. Remember that for NengoDL models, encoders and decoders don't really make much sense, because the NEF hasn't been used to optimize the connection weights.

I think this depends on what you have set for the synapse parameter when you create the NengoDL Converter. If you leave the synapse value at None, the converter will make an unfiltered connection between the ensemble and your custom node, which means your node will receive raw spiking input instead of filtered input.
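To give a feel for what that filtering does, here is a NumPy-only sketch of a first-order lowpass (the same kind of filter a `nengo.Lowpass` synapse applies), run over a made-up spike train; the time constant, timestep, and spike times are all arbitrary illustration values:

```python
import numpy as np


def lowpass_filter(spikes, dt=0.001, tau=0.005):
    # Discrete first-order lowpass: y[t] = a * y[t-1] + (1 - a) * x[t].
    # This approximates what a nengo.Lowpass synapse does to its input.
    a = np.exp(-dt / tau)
    out = np.zeros(len(spikes))
    y = 0.0
    for i, x in enumerate(spikes):
        y = a * y + (1 - a) * x
        out[i] = y
    return out


# Toy spike train: spikes in Nengo have amplitude 1/dt at the spike times
dt = 0.001
spikes = np.zeros(10)
spikes[[2, 5]] = 1.0 / dt
filtered = lowpass_filter(spikes, dt=dt)
```

With `synapse=None`, your node sees something like `spikes` directly (brief, large-amplitude impulses); with a lowpass synapse set on the converter, it sees something like `filtered` (a smoothed, decaying trace).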

As a side note, I discuss some other options for implementing the MaxPool operator using a custom Layer builder class in this forum thread. Not sure if that would be helpful to you.