Hello, guys

I’m optimizing an SNN with NengoDL, using the TensorFlow approach and the Adam optimizer. In the training phase, I understand that the algorithm uses SoftLIFRate neurons to find the optimal weights, and in the prediction phase it uses the LIF neuron itself.

I have read the LIF algorithm implemented in Nengo, and I have two questions.

1 - In the following part of the algorithm, the output of the neurons that spiked is set to the amplitude divided by dt. I do not understand why:

```
# determine which neurons spiked (set them to 1/dt, else 0)
spiked_mask = voltage > 1
output[:] = spiked_mask * (self.amplitude / dt)
```
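To check my own understanding of the 1/dt scaling, here is a minimal NumPy sketch (my illustration, not Nengo internals). The idea, as I understand it, is that a spike is a unit-area impulse: if a spiking step carries the value amplitude/dt for one timestep, then integrating (or low-pass filtering) the output over time recovers amplitude × firing rate, independently of the chosen dt:

```python
import numpy as np

dt = 0.001        # simulation timestep (s)
amplitude = 1.0   # spike amplitude
rate = 100        # assumed regular firing rate (Hz)
steps = 1000      # simulate 1 second

# regular spike train: one spike every 1/(rate*dt) steps
spiked_mask = np.zeros(steps, dtype=bool)
spiked_mask[:: int(1 / (rate * dt))] = True

# Nengo-style output: amplitude/dt on spiking steps, 0 otherwise
output = spiked_mask * (amplitude / dt)

# integrating output * dt over 1 s recovers amplitude * rate
print(np.sum(output * dt))  # -> 100.0
```

With this convention, downstream filtering of the spike train yields values on the same scale as the neuron's firing rate (in Hz, times the amplitude), no matter how small dt is.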

2 - My network has an input layer, two hidden layers (A and B), and an output layer, similar to the example in: Optimizing a spiking neural network — NengoDL 3.6.1.dev0 docs. I have already trained it, so I’m now in the prediction phase.

I have read that if I add a probe on the output layer with a synapse (filter), the output spike train will be smoothed and then multiplied by the optimal weights found during training. So I’m assuming this is the output of the network over time.

My question is: are the outputs of the hidden layers also smoothed by the synapse filter?

If they are not smoothed, is the output from a hidden-layer neuron_i that spiked equal to (amplitude/dt) * weights?
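To make the two cases concrete, here is a NumPy sketch of what I mean (my own assumptions: a single hypothetical connection weight, a regular 100 Hz spike train, and a discrete lowpass filter of the form y[t] = a*y[t-1] + (1-a)*x[t], which I believe matches a lowpass synapse). Unfiltered, the downstream input on a spiking step is the full (amplitude/dt) * weight pulse; filtered, it settles near rate * amplitude * weight:

```python
import numpy as np

dt = 0.001
amplitude = 1.0
tau = 0.01      # assumed synapse time constant (s)
weight = 0.5    # a single hypothetical connection weight

# regular 100 Hz spike train from a hidden neuron, 1 s long
spikes = np.zeros(1000)
spikes[::10] = amplitude / dt   # Nengo-style spike values

# discrete lowpass filter: y[t] = a*y[t-1] + (1-a)*x[t]
a = np.exp(-dt / tau)
filtered = np.zeros_like(spikes)
for t in range(1, len(spikes)):
    filtered[t] = a * filtered[t - 1] + (1 - a) * spikes[t]

# unfiltered: the input to the next layer on a spike step is a large pulse
print(spikes[10] * weight)               # -> 500.0

# filtered: the time-averaged input is near rate * amplitude * weight
print(np.mean(filtered[500:]) * weight)  # ~ 50 (100 Hz * 1.0 * 0.5)
```

So if the hidden-layer outputs are not smoothed, the next layer sees brief (amplitude/dt) * weight pulses rather than a rate-scale signal, which is why I want to confirm whether the probe synapse applies only at the probe or also between layers.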

Thank you so much