Noise during training?

Hi! I am working on a project that tries to relate the effects of stochastic spiking to dropout in rate-based networks. Specifically, I am investigating MC (Monte Carlo) or variational dropout, which allows one to quantify uncertainty in the output by keeping dropout active at test time. In my experiments I have noticed that even when I train a NengoDL model without dropout, I observe an MC dropout effect during testing that I would only expect when training with dropout, since only then should the model learn a distribution over models/weights.

After being puzzled for some time, I read in the paper “Spiking Deep Networks with LIF Neurons” that noise is injected during training to improve stability. I have combed the NengoDL source code for some time to see whether I can find noise being added somewhere, but could not find anything. Hence my question: is noise added anywhere when training a spiking network with the soft LIF approximation, without this being explicitly stated? I only use TensorFlow nodes in my model. On that note, is there anything else going on under the hood for stability or optimisation purposes that one should be aware of? I hope the question is not too vague and someone can help me clarify this issue.

Hello! No, NengoDL does not inject any noise automatically; that would be up to the user to add if they wanted.
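If you did want noise, one option in core Nengo is the noise parameter on Ensemble, which injects a noise process into the neurons’ input current. A minimal sketch, just to illustrate the API (whether and how NengoDL applies such a process during training may depend on the version you are using):

import nengo
from nengo.dists import Gaussian
from nengo.processes import WhiteNoise

with nengo.Network() as net:
    # Gaussian white noise added to the neurons' input current
    ens = nengo.Ensemble(100, dimensions=1,
                         noise=WhiteNoise(dist=Gaussian(mean=0, std=0.1)))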

The only thing happening under the hood that comes to mind and might impact your results is the automatic swapping between rate-based and spiking neuron models. For example, if you create a model with nengo.LIF neurons, then during training those will automatically be swapped to nengo.LIFRate neurons. Not sure if that is relevant to the analysis you’re doing, but that’s the only thing I can think of. What is it about your data that makes you think there is a dropout effect?
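To make the swapping concrete, here is a minimal sketch (a placeholder network, not your model): the network is defined with spiking nengo.LIF neurons, the rate version is substituted for training, and regular simulation uses the spiking neurons as defined:

import numpy as np
import nengo
import nengo_dl

with nengo.Network() as net:
    inp = nengo.Node(np.zeros(1))
    # defined as spiking LIF neurons
    ens = nengo.Ensemble(100, dimensions=1, neuron_type=nengo.LIF())
    out = nengo.Node(size_in=1)
    nengo.Connection(inp, ens)
    nengo.Connection(ens, out)
    probe = nengo.Probe(out, synapse=0.01)

with nengo_dl.Simulator(net) as sim:
    # during training (sim.train in older NengoDL versions, sim.fit in newer
    # ones) the nengo.LIF neurons are automatically swapped for rate-based
    # nengo.LIFRate neurons; regular simulation runs the spiking LIF neurons
    # as defined above
    sim.run(1.0)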

Thanks for your answer.
Monte Carlo dropout is a way to obtain uncertainties in the predictions at test time by taking the variance over multiple forward passes with different dropout masks. Yarin Gal has shown that dropout essentially performs variational inference on the model weights with a Bernoulli variational distribution, so at test time one can sample from the variational posterior and obtain a mean and variance instead of a point estimate. But this should only be possible if the network is trained with dropout - or at least that is what I would expect. We want to investigate this in spiking neural networks and trained the network without dropout as a sanity check. To our surprise, we observed that even when training without dropout the network seems to give sensible uncertainty estimates at test time, so we were a bit puzzled. I thought noise injection could have been a reasonable explanation, but that seems to be ruled out. All of this happens completely within the rate-based setting, i.e. with nengo.LIFRate neurons - the spiking part is another story.
Since it does not seem to be anything Nengo does, I should probably have another look at the MC dropout literature.
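For concreteness, this is the kind of procedure I mean. A minimal sketch in plain Keras rather than our actual NengoDL model (assumes TensorFlow 2.x; the layer sizes and number of samples are arbitrary):

import numpy as np
import tensorflow as tf

# small model with a dropout layer; normally this would be trained with dropout active
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1),
])

x = np.random.randn(32, 10).astype("float32")

# MC dropout: keep dropout active at test time (training=True) and collect
# multiple stochastic forward passes with different dropout masks
samples = np.stack([model(x, training=True).numpy() for _ in range(100)])
prediction = samples.mean(axis=0)    # predictive mean
uncertainty = samples.std(axis=0)    # spread across masks as an uncertainty estimate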

So what you’re observing is that when you run multiple forward passes at test time, you get different output each time? That shouldn’t be the case; the forward passes should be deterministic (assuming there is no other source of randomness in your model). Hard to say what the issue might be. One guess: are you resetting the simulation between runs? For example, if you do

sim.run(1.0)
...
sim.run(1.0)

the second call to sim.run resumes from the terminal state of the first call. That means things like neuron voltages and synaptic filter states start from different initial conditions, so you would expect some variability in the output. You can make sure that each run starts from the same initial state by calling sim.soft_reset, e.g.

sim.run(1.0)
...
sim.soft_reset()  # restore neuron voltages, filter states, etc. to their initial values
sim.run(1.0)      # now starts from the same initial state as the first run
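With that pattern, a loop like the following should give identical outputs on every pass if there is no other randomness in the model (a sketch; sim and probe are assumed to be your nengo_dl.Simulator and a nengo.Probe from your network, and the slicing is there because probe data may accumulate across runs depending on the version):

import numpy as np

n_steps = int(round(1.0 / sim.dt))
outputs = []
for _ in range(5):
    sim.soft_reset()  # start each pass from the same initial state
    sim.run(1.0)
    # keep only the data from the most recent run
    outputs.append(np.array(sim.data[probe][-n_steps:]))

# with no randomness in the model, every pass should be identical
assert all(np.allclose(outputs[0], out) for out in outputs[1:])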