[Nengo DL]: Building a ConvNet with kernel regularizers, dropout, and max pooling layers

Thanks @Eric for the detailed response. With respect to kernel regularization, all my doubts are resolved for now. I don't yet have access to Loihi, but when I do, if I run into issues I will ping here.

However, I have the following two questions about the MaxPooling equivalent in Nengo-DL.

1> Since the network is simulated for n_steps timesteps (i.e. n_steps milliseconds with the default dt of 1 ms) and the spiking neurons output spikes during that interval, can't we simply compute firing rates (i.e. # spikes / n_steps) and then take the max of those values at the end of the run? Or is it that the max function cannot be computed with spiking neurons? I am probably missing something fundamental here…
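To make question 1 concrete, here is a rough sketch of what I have in mind, with synthetic data standing in for the output of a real Nengo-DL probe; the (batch, n_steps, n_neurons) shape and the 1/dt scaling of spikes are my assumptions about what `sim.data[probe]` would contain for a spiking neuron layer:

```python
import numpy as np

# Synthetic stand-in for sim.data[probe] on a spiking neuron layer:
# shape (batch, n_steps, n_neurons), with each spike represented as an
# impulse of height 1/dt for one timestep (my assumption about the scaling).
dt = 0.001
batch, n_steps, n_neurons = 2, 50, 16
rng = np.random.RandomState(0)
spikes = (rng.rand(batch, n_steps, n_neurons) < 0.2) / dt

# Estimated firing rate per neuron over the run: number of spikes divided
# by the run length (equivalently, the time average of the scaled spike train).
rates = (spikes * dt).sum(axis=1) / (n_steps * dt)  # in Hz

# "Max pooling" done offline on the estimated rates, e.g. over groups of 4
# neurons standing in for a 2x2 pooling window (purely illustrative grouping).
pooled = rates.reshape(batch, -1, 4).max(axis=-1)
print(pooled.shape)  # (2, 4)
```

Here the max is applied only after the whole run, i.e. it is an offline operation on the estimated rates rather than something computed by the spiking neurons during the simulation, which is exactly why I wonder whether this approach is acceptable.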

2> If the max function cannot be computed with spiking neurons, but there is a similar function that can be, how do we incorporate its implementation into a Nengo-DL converted network?

Please let me know.