Replace output of hidden layer with nengo_dl

Hi all,
I’m trying to reproduce the results from this paper: [2109.13208] Spiking neural networks trained via proxy
Is it possible to replace the output activity of a given layer with random values using nengo_dl before backward propagation?

You’d have to write your own custom neuron type, which would let you add custom logic for the backward pass. You can see, for example, how the ReLU neuron type is implemented here: nengo-dl/neuron_builders.py at master · nengo/nengo-dl · GitHub. You’d want to start from something like that, but wrap the neuron operations in a tf.custom_gradient; that is where you would substitute your random values on the forward pass and define the gradient to use on the backward pass.
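
In case it helps, here is a minimal standalone sketch of the tf.custom_gradient mechanism itself (plain TensorFlow, not the nengo-dl builder machinery; the function name noisy_forward and the ReLU-style surrogate gradient are just illustrative assumptions):

```python
import tensorflow as tf

@tf.custom_gradient
def noisy_forward(x):
    # Forward pass: discard the real activations and emit random values,
    # mimicking "replace the output activity of a layer with random values".
    out = tf.random.uniform(tf.shape(x), dtype=x.dtype)

    def grad(dy):
        # Backward pass: propagate whatever gradient you choose. Here we
        # use a ReLU gradient on the original input as a stand-in
        # (surrogate); the random forward values are ignored.
        return dy * tf.cast(x > 0, dy.dtype)

    return out, grad

# Quick check: the gradient comes from the surrogate, not the random output.
x = tf.constant([[-1.0, 0.5, 2.0]])
with tf.GradientTape() as tape:
    tape.watch(x)
    y = noisy_forward(x)
print(tape.gradient(y, x))  # -> [[0., 1., 1.]]
```

Inside a custom neuron builder you would apply the same decorator around the ops that compute the neuron output (the step logic in neuron_builders.py linked above), so the simulator’s forward pass uses your replacement values while backprop sees the gradient you define.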